27 research outputs found

    Intelligent Management of Hierarchical Behaviors Using a NAO Robot as a Vocational Tutor

    To create an intelligent system capable of conducting an interview with the NAO robot as the interviewer, playing the role of a vocational tutor, twenty behaviors were classified and categorized within five personality profiles. Five basic emotions are considered: anger, boredom, interest, surprise, and joy, and the selected behaviors are grouped according to these five emotions. The common behaviors (e.g., movements or body postures) used by the robot during vocational guidance sessions are based on the "Five Factor Model" theory of personality traits. The robot asks a predefined set of questions about the person's vocational preferences according to a theoretical model called the "Orientation Model", so that NAO can react as appropriately as possible during the interview given the score of the answer to each question and the person's personality type. Based on these answers, a vocational profile is established and the robot can issue a recommendation about the person's vocation. The results show that the intelligent selection of behaviors can be successfully achieved with the proposed approach, making the interaction between a human and a robot friendlier.
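
    The selection logic can be pictured as a mapping from an answer score and a personality profile to one of the five emotions, and from that emotion to a concrete behavior. The following minimal sketch illustrates the idea; the thresholds, profile names, and behavior lists are hypothetical and not taken from the paper.

```python
import random

# Hypothetical sketch of the behavior-selection idea: an answer score and a
# personality profile are mapped to one of five emotions, and a behavior is
# drawn from that emotion's group. All names, thresholds, and behaviors are
# illustrative, not the authors' actual implementation.

BEHAVIORS = {
    "anger":    ["shake_head", "cross_arms"],
    "boredom":  ["slouch", "look_away"],
    "interest": ["lean_forward", "nod"],
    "surprise": ["raise_arms", "step_back"],
    "joy":      ["clap", "open_arms"],
}

def emotion_for(score: float, profile: str) -> str:
    """Map an answer score (0..1) and a Five Factor profile to an emotion."""
    if score < 0.2:
        return "anger" if profile == "low_agreeableness" else "boredom"
    if score < 0.5:
        return "interest"
    if score < 0.8:
        return "surprise"
    return "joy"

def select_behavior(score: float, profile: str) -> str:
    """Pick one concrete behavior (gesture/posture) for the robot to play."""
    return random.choice(BEHAVIORS[emotion_for(score, profile)])

print(select_behavior(0.9, "high_extraversion"))  # e.g. 'clap'
```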

    Classification approaches for microarray gene expression data analysis

    Microarray technology is among the key technological advancements in bioinformatics. Microarray data is typically noisy and high-dimensional, so carefully prepared data is a prerequisite for its analysis. Classification of biological samples is the most commonly performed analysis on microarray data. This study focuses on determining the confidence level used for classifying a sample of an unknown gene based on microarray data. A support vector machine (SVM) classifier was applied and the results were compared with other classifiers, including K-nearest neighbor (KNN) and a neural network (NN). Four microarray datasets were used: leukemia, prostate, colon, and breast. The study also analyzed two SVM kernels, radial and linear. The analysis was conducted by varying the training/test split percentages to ensure that the best-performing partitions of the data were identified. Cross-validation (10-fold/LOOCV) and L1/L2 regularization were used to address over-fitting and to perform feature selection in classification. ROC curves and confusion matrices were used for performance assessment. The KNN and NN classifiers were trained on the same data and the results were compared. The results showed that the SVM outperformed the other classifiers in accuracy; for each dataset, the SVM with the linear kernel yielded better results than the other methods. The highest accuracy on the colon data was 83% with the SVM classifier, versus 77% with NN and 72% with KNN. The leukemia data reached 97% with SVM, 85% with NN, and 91% with KNN. For the breast data, the highest accuracy was 73% with SVM-L2, versus 56% with NN and 47% with KNN. Finally, the highest accuracy on the prostate data was 80% with SVM-L1, versus 75% with NN and 66% with KNN. The SVM also showed the highest area under the curve compared with KNN and NN across the different tests.
    Master of Science (MSc) in Computational Science.
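
    The evaluation protocol described above (SVM with linear and radial kernels, compared against KNN and a neural network under 10-fold cross-validation) can be sketched as follows with scikit-learn. A synthetic high-dimensional dataset stands in for the leukemia/colon/prostate/breast microarray sets, which are not bundled here, so this is an illustration of the workflow rather than a reproduction of the study.

```python
# Sketch of the classifier comparison on a synthetic stand-in for microarray data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Many features, few samples: roughly the shape of microarray data.
X, y = make_classification(n_samples=100, n_features=2000,
                           n_informative=20, random_state=0)

models = {
    "SVM (linear)": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "SVM (RBF)":    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "SVM-L1":       make_pipeline(StandardScaler(),
                                  LinearSVC(penalty="l1", dual=False, max_iter=5000)),
    "KNN":          make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "NN (MLP)":     make_pipeline(StandardScaler(),
                                  MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000)),
}

# 10-fold cross-validated accuracy, as in the study's evaluation protocol.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name:12s} accuracy = {scores.mean():.2f} +/- {scores.std():.2f}")
```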

    Neural networks in multiphase reactors data mining: feature selection, prior knowledge, and model design

    Artificial neural networks (ANN) have recently gained enormous popularity in many engineering fields, not only for their appealing "learning ability" but also for their versatility and superior performance with respect to classical approaches. Without assuming a particular equational form, ANNs mimic complex nonlinear relationships that might exist between an input feature vector x and a dependent (output) variable y. In the context of multiphase reactors the potential of neural networks is high, as modelling by resolution of first-principle equations to forecast the sought key hydrodynamic and transfer characteristics is intractable, particularly for gas-liquid-solid systems. The general-purpose applicability of neural networks to regression and classification, however, poses some subsidiary difficulties that can make their use inappropriate for certain modelling problems. Some of these problems are common to any empirical modelling technique, including the feature selection step, in which one has to decide which subset xs ⊂ x should constitute the inputs (regressors) of the model. Other weaknesses specific to neural networks are overfitting, model design ambiguity (architecture and parameter identification), and the lack of interpretability of the resulting models.
    This work addresses three issues in the application of neural networks: i) feature selection, ii) matching prior knowledge within the models (to answer, to some extent, the overfitting and interpretability issues), and iii) model design. Feature selection was conducted with genetic algorithms (yet another companion from the artificial intelligence area), which allowed the identification of good combinations of dimensionless inputs for regression ANNs, or with sequential methods in a classification context. The types of a priori knowledge the resulting ANN models were required to match were monotonicity and/or concavity in regression, and class connectivity and different misclassification costs in classification. Even though the purpose of the study was mainly methodological, some of the resulting ANN models may be considered contributions in their own right. These models, which serve as direct proofs of the underlying methodologies, are useful for predicting liquid hold-up and pressure drop in counter-current packed beds and flow regime type in trickle beds.
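
    A wrapper-style feature selection of the kind described above can be sketched as a genetic algorithm over binary feature masks, with the cross-validated score of a small neural network as fitness. The sketch below uses synthetic data and conventional GA operators; it is illustrative only and is not the thesis code (which also handles dimensionless groups, monotonicity constraints, and misclassification costs).

```python
# GA-based wrapper feature selection for a small ANN regressor (illustrative).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8 candidate inputs, only 3 of which matter.
X = rng.normal(size=(200, 8))
y = 2 * X[:, 0] - X[:, 2] + 0.5 * X[:, 5] + 0.1 * rng.normal(size=200)

def fitness(mask: np.ndarray) -> float:
    """Cross-validated R^2 of a small ANN trained on the selected features."""
    if mask.sum() == 0:
        return -np.inf
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    return cross_val_score(model, X[:, mask.astype(bool)], y, cv=3).mean()

def evolve(pop_size=20, generations=15, p_mut=0.1):
    """Evolve binary feature masks with selection, crossover, and mutation."""
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # selection
        children = parents.copy()
        cut = X.shape[1] // 2
        children[:, cut:] = np.roll(parents, 1, axis=0)[:, cut:]  # crossover
        flip = rng.random(children.shape) < p_mut                 # mutation
        children = np.where(flip, 1 - children, children)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

print("selected feature mask:", evolve())
```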

    Particle Swarm Optimization

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique influenced by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as genetic algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations. Unlike GA, however, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book gathers the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary area.
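
    The particle update just described (no crossover or mutation, only velocities pulled towards each particle's personal best and the swarm's global best) can be written in a few lines. The sketch below minimises the 2-D sphere function with conventional parameter values; it is a generic illustration, not code from the book.

```python
# Minimal canonical PSO on a toy 2-D minimisation problem.
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=1)   # objective to minimise

rng = np.random.default_rng(0)
n_particles, dim, iters = 30, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5           # inertia, cognitive and social weights

x = rng.uniform(-5, 5, (n_particles, dim))   # positions
v = np.zeros_like(x)                         # velocities
pbest = x.copy()                             # personal best positions
pbest_val = sphere(x)
gbest = pbest[np.argmin(pbest_val)]          # global best position

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Velocity update: follow own best and the swarm's best (no crossover/mutation).
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = sphere(x)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best solution:", gbest, "objective:", pbest_val.min())
```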

    Tangible auditory interfaces : combining auditory displays and tangible interfaces

    Bovermann T. Tangible auditory interfaces: combining auditory displays and tangible interfaces. Bielefeld (Germany): Bielefeld University; 2009. This work on Tangible Auditory Interfaces (TAIs) investigates the capabilities of interconnecting Tangible User Interfaces and Auditory Displays. TAIs use artificial physical objects as well as soundscapes to represent digital information. The interconnection of the two fields establishes a tight coupling between information and operation that builds on the user's familiarity with the underlying interrelations. This work gives a formal introduction to TAIs and demonstrates their key features by means of seven proof-of-concept applications.
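
    The auditory-display half of a TAI can be illustrated with a simple parameter-mapping sonification, in which each data value controls the pitch of a short tone. The sketch below is a generic illustration written with the Python standard library and NumPy; it is not the software developed in the thesis.

```python
# Parameter-mapping sonification: data values become tone pitches in a WAV file.
import wave
import numpy as np

def sonify(values, path="sonification.wav", rate=44100, tone_s=0.2):
    lo, hi = min(values), max(values)
    samples = []
    for v in values:
        # Map the value linearly onto a pitch range of 220-880 Hz.
        freq = 220 + (880 - 220) * (v - lo) / (hi - lo or 1)
        t = np.arange(int(rate * tone_s)) / rate
        samples.append(0.3 * np.sin(2 * np.pi * freq * t))
    signal = np.concatenate(samples)
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)                       # 16-bit PCM
        f.setframerate(rate)
        f.writeframes((signal * 32767).astype(np.int16).tobytes())

sonify([3, 7, 2, 9, 5])   # rising/falling data becomes rising/falling pitch
```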

    A framework for advanced processing of dynamic X-ray micro-CT data


    Research and technology, 1993. Salute to Skylab and Spacelab: Two decades of discovery

    A summary description of Skylab and Spacelab is presented. The section on Advanced Studies includes projects in space science, space systems, commercial use of space, and transportation systems. Within the Research Programs area, programs are listed under earth systems science, space physics, astrophysics, and microgravity science and applications. Technology Programs include avionics, materials and manufacturing processes, mission operations, propellant and fluid management, structures and dynamics, and systems analysis and integration. Technology transfer opportunities and successes are briefly described. A glossary of abbreviations and acronyms is appended, as is a list of contract personnel within the program areas.

    The polarimetric and helioseismic imager on solar orbiter

    This paper describes the Polarimetric and Helioseismic Imager on the Solar Orbiter mission (SO/PHI), the first magnetograph and helioseismology instrument to observe the Sun from outside the Sun-Earth line. It is the key instrument meant to address the top-level science question: How does the solar dynamo work and drive connections between the Sun and the heliosphere? SO/PHI will also play an important role in answering the other top-level science questions of Solar Orbiter, and it holds the potential for a rich return in further science. SO/PHI measures the Zeeman effect and the Doppler shift in the Fe I 617.3 nm spectral line. To this end, the instrument carries out narrow-band imaging spectro-polarimetry using a tunable LiNbO3 Fabry-Perot etalon, while the polarisation modulation is done with liquid crystal variable retarders (LCVRs). The line and the nearby continuum are sampled at six wavelength points and the data are recorded by a 2k x 2k CMOS detector. To save valuable telemetry, the raw data are reduced on board, including being inverted under the assumption of a Milne-Eddington atmosphere, although simpler reduction methods are also available on board. SO/PHI is composed of two telescopes: one, the Full Disc Telescope (FDT), covers the full solar disc at all phases of the orbit, while the other, the High Resolution Telescope (HRT), can resolve structures as small as 200 km on the Sun at closest perihelion. The high heat load generated by proximity to the Sun is greatly reduced by the multilayer-coated entrance windows of the two telescopes, which allow less than 4% of the total sunlight to enter the instrument, most of it in a narrow wavelength band around the chosen spectral line.
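
    The two physical measurements the instrument relies on can be illustrated with back-of-the-envelope formulas: the Doppler shift of the Fe I 617.3 nm line gives the line-of-sight velocity, and in the weak-field limit the Zeeman splitting scales linearly with the line-of-sight magnetic field. The shift values in the sketch below are invented for illustration; the actual on-board pipeline performs a Milne-Eddington inversion of the full polarimetric measurements.

```python
# Back-of-the-envelope Doppler velocity and weak-field Zeeman estimates for
# the Fe I 617.3 nm line. Example shifts are hypothetical.

C = 2.998e8            # speed of light, m/s
LAMBDA0 = 617.3e-9     # Fe I rest wavelength, m
G_EFF = 2.5            # effective Lande factor of the Fe I 617.3 nm line

def doppler_velocity(delta_lambda_m: float) -> float:
    """Line-of-sight velocity (m/s) from a measured wavelength shift (m)."""
    return C * delta_lambda_m / LAMBDA0

def zeeman_field(delta_lambda_b_angstrom: float) -> float:
    """Line-of-sight field (gauss) from the Zeeman splitting (angstrom),
    using delta_lambda_B = 4.67e-13 * g_eff * lambda0[A]^2 * B[G]."""
    lambda0_angstrom = LAMBDA0 * 1e10
    return delta_lambda_b_angstrom / (4.67e-13 * G_EFF * lambda0_angstrom ** 2)

# Hypothetical shifts of a couple of picometres / tens of milliangstroms:
print(doppler_velocity(2.0e-12))   # ~970 m/s
print(zeeman_field(0.02))          # ~450 gauss
```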

    Thermomechanical Behaviour of Two Heterogeneous Tungsten Materials via 2D and 3D Image-Based FEM
