An Artificial Intelligence Framework for Supporting Coarse-Grained Workload Classification in Complex Virtual Environments: Cloud-based machine learning tools for enhanced Big Data applications
The main idea is that of predicting the "next" workload occurring against the target Cloud infrastructure via an innovative ensemble-based approach that combines the effectiveness of different well-known classifiers in order to enhance the overall accuracy of the final classification, which is highly relevant in the current context of Big Data. The so-called workload categorization problem plays a critical role in improving the efficiency and reliability of Cloud-based big data applications. Implementation-wise, our method proposes deploying the Cloud entities that participate in the distributed classification approach on top of virtual machines, which represent classical "commodity" settings for Cloud-based big data applications. Given a number of known reference workloads, and an unknown workload, in this paper we deal with the problem of finding the reference workload which is most similar to the unknown one. The depicted scenario turns out to be useful in a plethora of modern information system applications. We name this problem coarse-grained workload classification because, instead of characterizing the unknown workload in terms of finer behaviors, such as CPU-, memory-, disk-, or network-intensive patterns, we classify the whole unknown workload as one of the (possible) reference workloads. Reference workloads represent a category of workloads that are relevant in a given applicative environment. In particular, we focus our attention on the classification problem described above in the special case of virtualized environments. Today, Virtual Machines (VMs) have become very popular because they offer important advantages to modern computing environments such as cloud computing or server farms.
In virtualization frameworks, workload classification is very useful for accounting, security, or user profiling. Hence, our research is particularly relevant in such environments, and it turns out to be very useful in the emerging context of Cloud Computing. In this respect, our approach consists of running several machine-learning-based classifiers over different workload models, and then deriving the best classification produced by the Dempster-Shafer fusion, in order to magnify the accuracy of the final classification. Experimental assessment and analysis clearly confirm the benefits derived from our classification framework. The running programs which produce the unknown workloads to be classified are treated in a similar way. A fundamental aspect of this paper concerns the successful use of data fusion in workload classification. Different types of metrics are in fact fused together using the Dempster-Shafer theory of evidence combination, giving a classification accuracy of slightly less than …. The acquisition of data from the running process, the pre-processing algorithms, and the workload classification are described in detail. Various classical algorithms have been used to classify the workloads, and the results are compared.
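The fusion step described above can be illustrated with Dempster's rule of combination over singleton hypotheses. This is a minimal sketch only: the classifier names and mass values below are hypothetical assumptions, not the authors' actual experimental setup, and the paper's full pipeline involves real workload metrics.

```python
# Dempster's rule of combination for two mass functions whose focal
# elements are singleton hypotheses (candidate reference workloads).
# Illustrative sketch; classifier names and masses are made up.

def dempster_combine(m1, m2):
    """Fuse two basic belief assignments defined over singletons."""
    hypotheses = set(m1) | set(m2)
    # Conflict K: mass jointly assigned to incompatible singletons.
    K = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
            for a in m1 for b in m2 if a != b)
    if K >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Agreeing mass, renormalized by 1 - K.
    return {h: m1.get(h, 0.0) * m2.get(h, 0.0) / (1.0 - K)
            for h in hypotheses}

# Two classifiers' (hypothetical) beliefs over three reference workloads.
m_svm = {"cpu-bound": 0.6, "io-bound": 0.3, "net-bound": 0.1}
m_knn = {"cpu-bound": 0.5, "io-bound": 0.4, "net-bound": 0.1}

fused = dempster_combine(m_svm, m_knn)
best = max(fused, key=fused.get)  # predicted reference workload
```

With singleton-only focal elements the rule reduces to renormalized elementwise products; combining more than two classifiers is done by folding the same binary rule, since Dempster combination is associative.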
Intensional Cyberforensics
This work focuses on the application of intensional logic to cyberforensic
analysis; its benefits and difficulties are compared with those of the
finite-state-automata approach. This work extends the use of the intensional
programming paradigm to the modeling and implementation of a cyberforensics
investigation process with backtracing of event reconstruction, in which
evidence is modeled by multidimensional hierarchical contexts, and proofs or
disproofs of claims are undertaken in an eductive manner of evaluation. This
approach is a practical, context-aware improvement over the finite state
automata (FSA) approach we have seen in previous work. As a base implementation
language model, we use in this approach a new dialect of the Lucid programming
language, called Forensic Lucid, and we focus on defining hierarchical contexts
based on intensional logic for the distributed evaluation of cyberforensic
expressions. We also augment the work with credibility factors surrounding
digital evidence and witness accounts, which have not been previously modeled.
The Forensic Lucid programming language, used for this intensional
cyberforensic analysis, is formally presented through its syntax and operational
semantics. In large part, the language is based on its predecessor and
codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective
Lucid, MARFL, and JOOIP, bound by the underlying intensional programming paradigm. Comment: 412 pages, 94 figures, 18 tables, 19 algorithms and listings; PhD
thesis; v2 corrects some typos and refs; also available on Spectrum at
http://spectrum.library.concordia.ca/977460
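The credibility factors attached to digital evidence and witness accounts are reminiscent of classical Shafer discounting of a belief source. The sketch below is only an analogy under that assumption; Forensic Lucid expresses credibility within its own intensional semantics, not via this code, and the scenario values are invented.

```python
# Shafer discounting: weight a source's basic belief assignment by a
# credibility factor alpha in [0, 1]; the discounted mass 1 - alpha
# moves to the whole frame Theta (total ignorance). Sketch only --
# not Forensic Lucid's actual evaluation mechanism.

THETA = "Theta"  # label for the full frame of discernment

def discount(m, alpha):
    """Return the bba m discounted by credibility alpha."""
    out = {h: alpha * v for h, v in m.items() if h != THETA}
    out[THETA] = alpha * m.get(THETA, 0.0) + (1.0 - alpha)
    return out

# A (hypothetical) witness account believed with credibility 0.8.
witness = {"tampered": 0.7, "intact": 0.3}
weighted = discount(witness, 0.8)
# masses: tampered 0.56, intact 0.24, Theta 0.2
```

A fully credible source (alpha = 1) is returned unchanged, while alpha = 0 collapses the account to total ignorance, so unreliable witnesses cannot dominate the combined evidence.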
Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected Works), Vol. 4
The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics. The contributions (see List of Articles published in this book, at the end of the volume) have been published or presented after disseminating the third volume (2009, http://fs.unm.edu/DSmT-book3.pdf) in international conferences, seminars, workshops and journals.
The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importances and reliabilities, pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, 2-tuple linguistic labels, the Electre Tri method, hierarchical proportional redistribution, basic belief assignment, subjective probability measures, Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others.
More applications of DSmT have emerged in the years since the appearance of the third DSmT book in 2009. Accordingly, the second part of this volume is about applications of DSmT in connection with electronic support measures, belief functions, sensor networks, ground moving target and multiple-target tracking, vehicle-borne improvised explosive devices, the belief interacting multiple model filter, seismic and acoustic sensors, support vector machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, dynamic data-driven application systems, the adjustment of secure communication trust analysis, and so on.
Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, chronologically ordered.
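A hallmark of DSmT is its family of Proportional Conflict Redistribution rules, which, unlike Dempster renormalization, reassign each partial conflict to the hypotheses that generated it. The sketch below implements PCR5 for two sources under the simplifying assumption of singleton-only focal elements; DSmT defines PCR5 on general hybrid frames as well, and the example masses are invented.

```python
# PCR5 (Proportional Conflict Redistribution rule no. 5) for two
# sources whose focal elements are singletons. Simplified sketch;
# not the full hybrid-frame DSmT formulation.

def pcr5_combine(m1, m2):
    hypotheses = set(m1) | set(m2)
    # Conjunctive consensus on agreeing singletons (no normalization).
    fused = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hypotheses}
    # Redistribute each partial conflict proportionally to the masses
    # of the two hypotheses involved in it.
    for a in m1:
        for b in m2:
            if a == b:
                continue
            total = m1[a] + m2[b]
            if total == 0.0:
                continue
            fused[a] += m1[a] ** 2 * m2[b] / total
            fused[b] += m2[b] ** 2 * m1[a] / total
    return fused

# Two (hypothetical) partially conflicting sources over {A, B}.
fused_pcr5 = pcr5_combine({"A": 0.6, "B": 0.4}, {"A": 0.7, "B": 0.3})
```

Because the conflicting mass is split between the hypotheses in proportion to their own masses rather than discarded, PCR5 stays well defined even for highly conflicting sources, where Dempster's renormalization becomes unstable.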
Coarse-grained workload categorization in virtual environments using the Dempster-Shafer fusion
Given a number of known reference workloads, and an unknown workload, this paper deals with the problem of finding the reference workload which is most similar to the unknown one. The depicted scenario turns out to be useful in a plethora of modern information system applications. We name this problem coarse-grained workload classification because, instead of characterizing the unknown workload in terms of finer behaviors, such as CPU-, memory-, disk-, or network-intensive patterns, we classify the whole unknown workload as one of the (possible) reference workloads. Reference workloads represent a category of workloads that are relevant in a given applicative environment. In particular, we focus our attention on the classification problem described above in the special case of virtualized environments. Today, Virtual Machines (VMs) have become very popular because they offer important advantages to modern computing environments such as cloud computing or server farms. In virtualization frameworks, workload classification is very useful for accounting, security, or user profiling. Hence, our research is particularly relevant in such environments, and it turns out to be very useful in the emerging context of cloud computing. In this respect, our approach consists of running several machine-learning-based classifiers over different workload models, and then deriving the best classification produced by the Dempster-Shafer fusion, in order to magnify the accuracy of the final classification. Experimental assessment and analysis clearly confirm the benefits deriving from our classification framework.
Urban Informatics
This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently, to become "smart" and "sustainable". The smart city has quickly emerged as computers have become ever smaller, to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of "big" data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics, from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient, with a greater concern for environment and equity.