19 research outputs found

    Learning to compare nodes in branch and bound with graph neural networks

    Full text link
    In computer science, solving NP-hard problems in a reasonable time is of great importance, such as in supply chain optimization, scheduling, routing, multiple biological sequence alignment, inference in probabilistic graphical models, and even some problems in cryptography. In practice, we model many of them as mixed integer linear optimization problems, which we solve using the branch and bound framework. An algorithm of this style divides a search space to explore it recursively (branch) and obtains optimality bounds by solving linear relaxations in such sub-spaces (bound). To specify an algorithm, one must set several parameters, such as how to explore search spaces, how to divide a search space once it has been explored, or how to tighten these linear relaxations. These policies can significantly influence solving performance. This work focuses on a novel method for deriving a search policy, that is, a rule for selecting the next sub-space to explore given a current partitioning, using deep machine learning. First, we collect data summarizing, over a collection of problem instances, which sub-spaces contain the optimum and which do not. By representing these sub-spaces as bipartite graphs encoding their characteristics, we train a graph neural network by supervised learning to estimate the probability that a sub-space contains the optimal solution. This design is particularly useful because the model can adapt to problems of different sizes without modification. We show that our approach outperforms competing ones, consisting of simpler machine learning models trained on solver statistics, as well as the default policy of SCIP, a state-of-the-art open-source solver, on three NP-hard benchmarks: generalized independent set, fixed-charge multicommodity network flow, and maximum satisfiability problems.
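    To make the described pipeline concrete, below is a minimal PyTorch sketch of this kind of model: variable and constraint features form a bipartite graph, one round of message passing updates the variable embeddings, and a pooled score is read as the logit that the sub-space contains the optimum. The layer sizes, the single message-passing round, and the mean pooling are illustrative assumptions, not the thesis architecture.

```python
# Hedged sketch of scoring a branch-and-bound node represented as a
# variable/constraint bipartite graph. Dimensions and toy data are assumptions.
import torch
import torch.nn as nn

class BipartiteGNN(nn.Module):
    def __init__(self, var_dim, cons_dim, hidden=64):
        super().__init__()
        self.var_embed = nn.Linear(var_dim, hidden)
        self.cons_embed = nn.Linear(cons_dim, hidden)
        self.msg = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.score = nn.Linear(hidden, 1)

    def forward(self, var_feats, cons_feats, edges):
        # var_feats: (n_vars, var_dim); cons_feats: (n_cons, cons_dim)
        # edges: (2, n_edges) long tensor of (constraint, variable) index pairs
        v = torch.relu(self.var_embed(var_feats))
        c = torch.relu(self.cons_embed(cons_feats))
        # One round of constraint -> variable message passing.
        m = self.msg(torch.cat([c[edges[0]], v[edges[1]]], dim=-1))
        v = v + torch.zeros_like(v).index_add_(0, edges[1], m)
        # Pool variable embeddings into one logit: the (unnormalized)
        # probability that this sub-space contains the optimal solution.
        return self.score(v.mean(dim=0))

# Toy usage: 30 variables with 19 features, 12 constraints with 5 features.
net = BipartiteGNN(var_dim=19, cons_dim=5)
edges = torch.stack([torch.randint(0, 12, (40,)), torch.randint(0, 30, (40,))])
print(torch.sigmoid(net(torch.randn(30, 19), torch.randn(12, 5), edges)))
```

    In training, such logits would be matched against binary labels (sub-space contains the optimum or not) with a cross-entropy loss; at solve time, the open node with the highest predicted probability would be explored next.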

    Neurobiological markers for remission and persistence of childhood attention-deficit/hyperactivity disorder

    Get PDF
    Attention-deficit/hyperactivity disorder (ADHD) is one of the most prevalent neurodevelopmental disorders in children. Symptoms of childhood ADHD persist into adulthood in around 65% of patients, which elevates the risk for a number of adverse outcomes, resulting in substantial individual and societal burden. A neurodevelopmental double dissociation model has been proposed based on existing studies, in which the early onset of childhood ADHD is suggested to be associated with dysfunctional subcortical structures that remain static throughout the lifetime, while diminution of symptoms over development could be linked to optimal development of the prefrontal cortex. Existing studies assess only basic measures, including regional brain activation and connectivity, which have limited capacity to characterize the functional brain as a high-performance parallel information processing system; the field lacks systems-level investigations of the structural and functional patterns that significantly contribute to symptom remission and persistence in adults with childhood ADHD. Furthermore, traditional statistical methods estimate group differences only within one voxel or region of interest (ROI) at a time, without the capacity to explore how ROIs interact in linear and/or non-linear ways, as they quickly become overburdened when attempting to combine predictors and their interactions from high-dimensional imaging data sets. This dissertation is the first study to apply ensemble learning techniques (ELTs) to multimodal neuroimaging features from a sample of adults with childhood ADHD and controls who have been clinically followed up since childhood. A total of 36 adult probands who were diagnosed with ADHD combined-type during childhood and 36 matched normal controls (NCs) are involved in this dissertation research. The 36 adult probands are further split into 18 remitters (ADHD-R) and 18 persisters (ADHD-P) based on their adult symptoms under DSM-IV ADHD criteria. Cued attention task-based fMRI, structural MRI, and diffusion tensor imaging data from each individual are analyzed. The high-dimensional neuroimaging features, including pair-wise regional connectivity and global/nodal topological properties of the functional brain network for the cue-evoked attention process, regional cortical thickness and surface area, subcortical volume, and volume and fractional anisotropy of major white matter fiber tracts, are calculated for each subject. In addition, all currently available optimization strategies for ensemble learning (i.e., voting, bagging, boosting, and stacking) are tested on a pool of semi-final classification results generated by seven base classifiers: K-nearest neighbors, support vector machine (SVM), logistic regression, naïve Bayes, linear discriminant analysis, random forest, and multilayer perceptron. As hypothesized, results indicate that the features of nodal efficiency in the right inferior frontal gyrus, right middle frontal (MFG)-inferior parietal (IPL) functional connectivity, and right amygdala volume significantly contributed to accurate discrimination between ADHD probands and controls; higher nodal efficiency of the right MFG greatly contributed to inattentive and hyperactive/impulsive symptom remission, while higher right MFG-IPL functional connectivity was strongly linked to symptom persistence in adults with childhood ADHD. The utilization of ELTs indicates that the bagging-based ELT with an SVM base model achieves the best results, with the most significant improvement in the area under the receiver operating characteristic curve (0.89 for ADHD probands vs. NCs, and 0.90 for ADHD-P vs. ADHD-R). The outcomes of this dissertation research have considerable value for the development of novel interventions that target mechanisms associated with recovery.
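    As a concrete illustration, the best-performing configuration reported above, a bagging ensemble over SVM base classifiers evaluated by ROC AUC, could be sketched in scikit-learn as follows. The synthetic 72-sample feature matrix, the RBF kernel, and all hyperparameters are stand-ins, not the dissertation's actual data or settings.

```python
# Hedged sketch: bagging over SVM base classifiers, scored by ROC AUC.
# X and y are synthetic stand-ins for the 72-subject multimodal feature set.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=72, n_features=100, random_state=0)

model = make_pipeline(
    StandardScaler(),
    # `estimator` was named `base_estimator` before scikit-learn 1.2.
    BaggingClassifier(estimator=SVC(kernel="rbf", probability=True),
                      n_estimators=50, random_state=0),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```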

    Advanced techniques for classification of polarimetric synthetic aperture radar data

    Get PDF
    Among the various remote sensing technologies that aid Earth observation, radar-based imaging has gained major interest due to advances in imaging techniques in the form of synthetic aperture radar (SAR) and polarimetry. The majority of radar applications focus on monitoring, detecting, and classifying local or global areas of interest to support humans in decision-making, analysis, and interpretation of Earth's environment. This thesis focuses on improving the classification performance and process, particularly for land use and land cover classification of polarimetric SAR (PolSAR) data. To achieve this, three contributions are studied, related to superior feature description and advanced machine-learning techniques including classifiers, principles, and data exploitation. First, this thesis investigates the use of color features in PolSAR image classification to provide additional discrimination on top of the conventional scattering information and texture features. The color features are extracted from the visual presentation of fully and partially polarimetric SAR data by generating pseudo-color images. In the experiments, the addition of the considered color features outperformed common PolSAR features alone and achieved higher classification accuracies than the traditional combination of PolSAR and texture features. Second, to address the large-scale learning challenge in PolSAR image classification with the utmost efficiency, this thesis introduces an adaptive and data-driven supervised classification topology called the Collective Network of Binary Classifiers (CNBC). This topology incorporates active learning to support human users in the analysis and interpretation of PolSAR data, focusing on collections of images where changes or updates to the existing classifier may be required frequently due to surface, terrain, and object changes as well as variations in capture time and position. Evaluations over an extensive set of experiments demonstrated the capabilities of the CNBC for adaptive, data-driven classification of single PolSAR images as well as collections of them. The experimental results verified that this evolutionary classification topology provides an efficient solution to the problems of scalability and dynamic adaptability, allowing both the feature space dimensions and the number of terrain classes in PolSAR image collections to vary dynamically. Third, most PolSAR classification problems are addressed by supervised machine learning, which requires manually labeled ground-truth data. To reduce manual labeling effort, supervised and unsupervised learning approaches are combined into semi-supervised learning in order to exploit the huge amount of unlabeled data. The application of semi-supervised learning in this thesis is motivated by ill-posed classification tasks related to the small-training-set problem. The thesis therefore investigates how much ground truth is actually necessary for certain classification problems to achieve satisfactory results under supervised and semi-supervised learning. To address this, two semi-supervised approaches are proposed: unsupervised extension of the training data and ensemble-based self-training. The evaluations showed that significant speed-ups and improvements in classification performance are achieved. In particular, for a remote sensing application such as PolSAR image classification, it is advantageous to exploit the location-based information in the labeled training data. Each of the developed techniques provides a stand-alone contribution, from a different viewpoint, to improving land use and land cover classification. The introduction of a new feature for better discrimination is independent of the underlying classification algorithms. The CNBC topology is applicable to various classification problems regardless of how the underlying data have been acquired, as in the case of remote sensing data. The semi-supervised learning approach tackles the challenge of utilizing unlabeled data. Combining these techniques for superior feature description with advanced machine-learning techniques that exploit classifier topologies and data yields further contributions to polarimetric SAR image classification. According to the conducted performance evaluations, including visual and numerical assessments, the proposed and investigated techniques show valuable improvements and can aid the analysis and interpretation of PolSAR image data. Due to their generic nature, applying the developed techniques to other remote sensing data will require only minor adjustments.
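    The ensemble-based self-training idea mentioned above can be sketched schematically as follows: pseudo-label the unlabeled pixels on which a trained ensemble is most confident, move them into the training set, and retrain. The random-forest ensemble, confidence threshold, and number of rounds are illustrative assumptions, not the thesis's actual configuration.

```python
# Schematic self-training loop: grow the labeled set with high-confidence
# pseudo-labels from an ensemble, then retrain. All settings are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break  # nothing left that the ensemble is sure about
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
    return clf
```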

    Complex land cover classifications and physical properties retrieval of tropical forests using multi-source remote sensing

    Get PDF
    The work presented in this thesis focuses on two subjects related to the application of remote sensing data: (1) land cover classification combining optical sensor data, texture features generated from spectral information, and synthetic aperture radar (SAR) features, and (2) the development of a non-destructive approach for estimating above-ground biomass (AGB) and forest attributes using multi-source remote sensing data (i.e., optical data and SAR backscatter) combined with in-situ data. Information provided by a reliable land cover map is useful for managing forest resources in support of sustainable forest management, whereas a non-destructive approach to modeling forest biophysical properties (e.g., AGB and stem volume) is required to assess forest resources more efficiently and cost-effectively; coupled with remote sensing data, such a model can be applied over large forest areas. This work considers study sites in tropical rain forest landscapes in Indonesia characterized by different successional stages and complex vegetation structure, including tropical peatland forests. The thesis begins with a brief introduction and a state of the art reviewing recent trends in monitoring and modeling forest resources with remote sensing. The research on the integration of spectral information and texture features for forest cover mapping is presented subsequently, followed by the development of a non-destructive approach for predicting and modeling AGB and forest parameters. Finally, this work evaluates the potential of mosaic SAR data for AGB modeling and the fusion of optical and SAR data for peatland discrimination. The results show that the inclusion of geostatistical texture features improved the classification accuracy of optical Landsat ETM data. Moreover, the fusion of SAR and optical data enhanced peatland discrimination over tropical peat swamp forest. For modeling forest stand parameters, the neural network method resulted in lower error estimates than the standard multi-linear regression technique, and the combination of non-destructive measurements (i.e., stem number) and remote sensing data improved model accuracy. The upscaling of stem volume and biomass estimates using the kriging method and a bi-temporal ETM image also provided favorable estimates when compared with the land cover map.
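    The reported model comparison, a neural network versus multi-linear regression for AGB with spectral, backscatter, and stem-number predictors, could be sketched as below; the synthetic plot data, network size, and cross-validation scheme are stand-ins, not the thesis's actual data or settings.

```python
# Hedged sketch: neural network vs. multi-linear regression for AGB,
# compared by cross-validated RMSE. All data and settings are stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))  # per-plot predictors: bands, backscatter, stem number
y_agb = rng.normal(size=120)   # per-plot above-ground biomass (e.g., t/ha)

models = {
    "multi-linear regression": LinearRegression(),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(16,),
                                                 max_iter=5000, random_state=0)),
}
for name, model in models.items():
    rmse = -cross_val_score(model, X, y_agb, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: RMSE = {rmse:.2f}")
```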

    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Get PDF
    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI: to create broad human-like and transhuman intelligence by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI, the production of AI systems displaying intelligence on specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity, and feasibility, of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.

    Decoding perceptual ability in the human brain: representational content and brain computations

    Full text link
    The ability to recognise the faces of our colleagues, friends, and family members is critical to our success as social beings. Our brains accomplish this feat with astonishing ease and speed, in a series of operations taking place within tens of milliseconds across a vast brain network of the ventral visual system. The ability to recognise faces, however, varies considerably from one person to another. Some individuals, called "super-recognisers", are able to recognise faces seen only once, years earlier. Others, called "prosopagnosics", are unable to recognise the faces of their colleagues or relatives, even with perfect vision and typical intelligence. A simple question remains largely unanswered: what mechanisms explain why some individuals are better at recognising faces? This thesis reports five articles studying the perceptual (articles 1, 2, 3) and neural (articles 4, 5) mechanisms behind these variations across different populations of individuals. Article 1 describes the content of visual representations of faces in a population with a comorbid diagnosis of schizophrenia and social anxiety disorder using an established psychophysical technique, Bubbles. We reveal for the first time the perceptual mechanisms of expression recognition in this population: a recognition deficit is accompanied by i) an underutilization of the eye region of expressive faces and ii) an underutilization of fine details. Article 2 then validates a new psychophysical technique that simultaneously reveals the visual content along three dimensions central to the visual system: position, spatial frequency, and orientation. We do not know, however, whether skilled individuals perform well across a variety of face recognition tasks and, if so, how they accomplish this feat. Article 3 measured, using the technique validated in article 2, the perceptual representations of 120 individuals during facial discrimination of gender and expressions (a total of >500,000 trials). We observed strong correlations between the ability to discriminate gender and facial expressions, as well as between the ability to discriminate gender and to identify faces. More importantly, we found a positive correlation between individual ability and the similarity of the perceptual representations used across these tasks. Article 4 examined differences in brain dynamics between super-recognisers and typical individuals using high-density electroencephalography (EEG) and machine learning. These tools allowed us to decode, for the first time, face recognition ability from the brain with up to 80% accuracy, from a mere second of brain activity. We then used Representational Similarity Analysis (RSA) to compare our participants' brain representations to those of deep learning models of object and language classification. This showed that super-recognisers, compared to individuals with typical perceptual abilities, had brain representations more similar to the visual and semantic computations of these optimal models. Article 5 reports an investigation of brain computations in the most specific and best-documented case of acquired prosopagnosia, patient PS. The same computational tools used in article 4 enabled us to decode PS's face identification deficits from her brain dynamics and to show, for the first time, that prosopagnosia is associated with deficits in high-level visual and semantic brain computations.
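    The decoding analysis described in article 4 follows a standard pattern that can be sketched as below: flatten each one-second EEG epoch into a feature vector and cross-validate a linear classifier on group labels. The epoch dimensions, the classifier, and the synthetic data are illustrative assumptions, not the article's actual pipeline.

```python
# Schematic decoding sketch: classify super-recognisers vs. typical
# observers from flattened 1 s EEG epochs. All data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
epochs = rng.normal(size=(200, 128, 250))  # trials x channels x time samples
labels = rng.integers(0, 2, size=200)      # 1 = super-recogniser, 0 = typical

X = epochs.reshape(len(epochs), -1)        # one feature vector per 1 s epoch
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
print(cross_val_score(clf, X, labels, cv=5, scoring="accuracy").mean())
```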

    The neuro-cognitive representation of word meaning resolved in space and time.

    Get PDF
    One of the core human abilities is that of interpreting symbols. Prompted with a perceptual stimulus devoid of any intrinsic meaning, such as a written word, our brain can access a complex multidimensional representation, called a semantic representation, which corresponds to its meaning. Notwithstanding decades of neuropsychological and neuroimaging work on the cognitive and neural substrates of semantic representations, many questions remain unanswered. The research in this dissertation attempts to unravel one of them: are the neural substrates of different components of concrete word meaning dissociated? In the first part, I review the different theoretical positions and empirical findings on the cognitive and neural correlates of semantic representations. I highlight how recent methodological advances, namely the introduction of multivariate methods for the analysis of distributed patterns of brain activity, broaden the set of hypotheses that can be empirically tested. In particular, they allow the exploration of the representational geometries of different brain areas, which is instrumental to understanding where and when the various dimensions of the semantic space are activated in the brain. Crucially, I propose an operational distinction between motor-perceptual dimensions (i.e., those attributes of the objects referred to by words that are perceived through the senses) and conceptual ones (i.e., information that is built via a complex integration of multiple perceptual features). In the second part, I present the results of the studies I conducted to investigate the automaticity of retrieval, the topographical organization, and the temporal dynamics of motor-perceptual and conceptual dimensions of word meaning. First, I show how the representational spaces retrieved with different behavioral and corpus-based methods (i.e., Semantic Distance Judgment, Semantic Feature Listing, WordNet) are highly correlated and overall consistent within and across subjects. Second, I present the results of four priming experiments suggesting that perceptual dimensions of word meaning (such as implied real-world size and sound) are recovered in an automatic but task-dependent way during reading. Third, with a functional magnetic resonance imaging experiment, I show a representational shift along the ventral visual path: from perceptual features, preferentially encoded in primary visual areas, to conceptual ones, preferentially encoded in mid and anterior temporal areas. This result indicates that complementary dimensions of the semantic space are encoded in a distributed yet partially dissociated way across the cortex. Fourth, by means of a magnetoencephalography study, I present evidence of early (around 200 ms after stimulus onset) simultaneous access to both motor-perceptual and conceptual dimensions of the semantic space through different aspects of the signal: inter-trial phase coherence appears to be key for the encoding of perceptual dimensions, while spectral power changes appear to support the encoding of conceptual ones. These observations suggest that the neural substrates of different components of symbol meaning can be dissociated in terms of localization and of the signal features encoding them, while sharing a similar temporal evolution.
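    The multivariate logic underlying these analyses can be sketched with a minimal RSA-style example: build representational dissimilarity matrices (RDMs) from brain patterns and from a model of the semantic space, then correlate their geometries. The data shapes, the correlation-distance metric, and the Spearman correlation are illustrative assumptions, not the dissertation's exact pipeline.

```python
# Minimal RSA sketch: correlate the representational geometry of brain
# patterns with that of a semantic model. All data are synthetic stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
brain_patterns = rng.normal(size=(60, 500))  # 60 words x 500 voxels/sensors
model_features = rng.normal(size=(60, 50))   # 60 words x 50 semantic dimensions

brain_rdm = pdist(brain_patterns, metric="correlation")  # condensed RDMs
model_rdm = pdist(model_features, metric="correlation")
rho, p = spearmanr(brain_rdm, model_rdm)  # similarity of the two geometries
print(f"RDM correlation: rho = {rho:.3f}")
```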

    24th International Conference on Information Modelling and Knowledge Bases

    Get PDF
    In the last three decades, information modelling and knowledge bases have become essential subjects, not only in academic communities related to information systems and computer science but also in business areas where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a co-operation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries as well. A workshop character, with discussion, ample time for presentations, and a limited number of participants (50) and papers (30), is typical for the conference. Suggested topics include, but are not limited to:
    1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models.
    2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling.
    3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundations of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models.
    4. Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction.
    5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems.
    6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context-enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems.
    Overall, we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for the presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the program committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence and Applications" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference.
    We believe that the conference will be productive and fruitful in advancing research on, and the application of, information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki.

    Multivariate Analysis in Management, Engineering and the Sciences

    Get PDF
    Recently, statistical knowledge has become an important requirement and occupies a prominent position in the exercise of various professions. In the real world, processes generate large volumes of data and are naturally multivariate, and as such require proper treatment. Under these conditions it is difficult or practically impossible to use methods of univariate statistics. The wide application of multivariate techniques and the need to spread them more fully in academia and business justify the creation of this book. The objective is to demonstrate interdisciplinary applications that identify patterns, trends, associations and dependencies in the areas of management, engineering and the sciences. The book is addressed to both practicing professionals and researchers in the field.