219 research outputs found

    Construction and Use Examples of Private Electronic Notary Service in Educational Institutions

    People handle many documents; in public institutions, for example, a wide variety of documents are prepared and used. As the internet has become widely available in recent years, paper documents are increasingly replaced with electronic data, which are often distributed without ever being printed. Similarly, in educational institutions, a growing number of documents are distributed as electronic data. Such data travel through various routes and media and are exposed to the risk of alteration along the way. Data can be protected against tampering, but it is difficult to prevent alteration completely during distribution. Data can be generated with an electronic signature, which allows third parties to identify the data creator and to detect alterations. This method, however, is no longer valid once the data becomes separated from its electronic signature, making verification of the creator, or of alterations, difficult or impossible. In this paper, we describe a system that, even when data has been separated from its electronic signature, enables easy detection of possible alterations through electronic signature management. We also describe an exploratory construction of a private electronic notary service at a university, and review how such a service can be used in universities.
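    The abstract does not detail the mechanism, but the underlying idea, a notary that keeps its own record of each document so alteration can be detected even after the document is separated from any attached signature, can be sketched as follows. The `NotaryRegistry` class and its methods are illustrative assumptions, not the paper's actual system:

```python
import hashlib

class NotaryRegistry:
    """Minimal sketch of a private notary service: store a digest of each
    document at registration time so that alteration can be detected later,
    even if the document circulates without its attached signature."""

    def __init__(self):
        self._records = {}  # doc_id -> hex digest recorded at notarization

    def register(self, doc_id: str, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self._records[doc_id] = digest
        return digest

    def verify(self, doc_id: str, data: bytes) -> bool:
        # True only if the data matches what was originally registered.
        return self._records.get(doc_id) == hashlib.sha256(data).hexdigest()

registry = NotaryRegistry()
registry.register("syllabus-2024", b"Lecture starts at 9:00.")
print(registry.verify("syllabus-2024", b"Lecture starts at 9:00."))   # True
print(registry.verify("syllabus-2024", b"Lecture starts at 10:00."))  # False
```

    Because only the registry's stored digest is consulted, verification still works when the signature itself has been lost in distribution, which is the failure mode the paper addresses.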

    An analysis of connectivity

    Recent evidence in biology indicates crossmodal influences in the brain, that is, information sharing between the different senses. This helps to explain phenomena such as the McGurk effect, where a person who knows they are seeing the lip movement "GA" while hearing the sound "BA" usually cannot help but perceive the sound "DA". The McGurk effect is an example of the visual sense influencing the perception of the auditory sense. These discoveries shift older feedforward models of the brain toward models that rely on feedback connections and, more recently, crossmodal connections. Although many software systems rely on some form of intelligence, e.g. person recognition software and speech-to-text software, very few take advantage of crossmodal influences. This thesis provides an analysis of the importance of connections between explicit modalities in a recurrent neural network model. Each modality is represented as an individual recurrent neural network. The connections between the modalities, and the modalities themselves, are trained by applying a genetic algorithm to a population of full models on certain types of classification problems. The main contribution of this work is to experimentally show the relative importance of feedback and crossmodal connections. From this it can be argued that using crossmodal information at an earlier stage of decision making can boost the accuracy and reliability of intelligent systems.
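    As a rough illustration of the architecture described (not the thesis's actual model, which is trained with a genetic algorithm), two small recurrent networks can be wired together with crossmodal connections so that each modality's update sees the other's previous hidden state. All weight matrices, sizes, and inputs here are made-up placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8                  # hidden units per modality
DIM_A, DIM_B = 4, 6    # e.g. audio and visual feature sizes (illustrative)

# Per-modality input and recurrent weights, plus crossmodal matrices C_*
# that feed one modality's previous hidden state into the other's update.
W_a, U_a, C_ba = (rng.normal(0, 0.1, s) for s in [(H, DIM_A), (H, H), (H, H)])
W_b, U_b, C_ab = (rng.normal(0, 0.1, s) for s in [(H, DIM_B), (H, H), (H, H)])

def step(x_a, x_b, h_a, h_b):
    """One joint time step: each modality sees its own input, its own
    recurrent state, and (crossmodally) the other modality's state."""
    new_a = np.tanh(W_a @ x_a + U_a @ h_a + C_ba @ h_b)
    new_b = np.tanh(W_b @ x_b + U_b @ h_b + C_ab @ h_a)
    return new_a, new_b

h_a, h_b = np.zeros(H), np.zeros(H)
for t in range(5):  # run a short sequence of paired "audio"/"visual" frames
    h_a, h_b = step(rng.normal(size=DIM_A), rng.normal(size=DIM_B), h_a, h_b)
print(h_a.shape, h_b.shape)
```

    Setting `C_ba` and `C_ab` to zero recovers two independent feedforward-plus-feedback networks, which is exactly the ablation the thesis's experiments compare against.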

    A novel approach for multimodal graph dimensionality reduction

    No full text
    This thesis deals with the problem of multimodal dimensionality reduction (DR), which arises when the input objects to be mapped onto a low-dimensional space consist of multiple vectorial representations instead of a single one. The problem is addressed in two alternative ways. The first is based on the traditional notion of modality fusion, but uses a novel approach to determine the fusion weights. In order to fuse the modalities optimally, the known graph embedding DR framework is extended to multiple modalities by considering a weighted sum of the involved affinity matrices. The weights of the sum are calculated automatically by minimizing an introduced notion of inconsistency of the resulting multimodal affinity matrix. The second way of dealing with the problem is to consider all modalities simultaneously, without fusing them, which has the advantage of minimal information loss due to fusion. To avoid fusion, the problem is viewed as a multi-objective optimization problem. The multiple objective functions are defined based on graph representations of the data, so that their individual minimization leads to dimensionality reduction for each modality separately. The aim is to combine the multiple modalities without assigning importance weights to them, or at least to postpone such an assignment to a final step. The proposed approaches were experimentally tested in mapping multimedia data onto low-dimensional spaces for purposes of visualization, classification, and clustering. The no-fusion approach, namely Multi-objective DR, was able to discover mappings revealing the structure of all modalities simultaneously, which cannot be discovered by weight-based fusion methods. However, it results in a set of optimal trade-offs from which one must be selected, which is not trivial.
The optimal-fusion approach, namely Multimodal Graph Embedding DR, easily extends unimodal DR methods to multiple modalities, but inherits the limitations of the unimodal DR method used. Both approaches were compared to state-of-the-art multimodal dimensionality reduction methods, and the comparison showed performance improvements in visualization, classification, and clustering tasks. The proposed approaches were also evaluated on different types of problems and data in two diverse application fields: a visual-accessibility-enhanced search engine and a visualization tool for mobile network security data. The results verified their applicability in different domains and suggested promising directions for future advancements.
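    The weighted-sum fusion of affinity matrices can be sketched as follows. Here the fusion weights are fixed by hand rather than computed from the thesis's inconsistency criterion, and a Laplacian-eigenmap style embedding merely stands in for the graph embedding DR framework; all data and names are illustrative:

```python
import numpy as np

def affinity(X, sigma=1.0):
    """Gaussian affinity matrix for one modality's feature vectors."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fused_embedding(modalities, weights, dim=2):
    """Weighted-sum fusion of per-modality affinities, followed by a
    standard spectral embedding of the fused graph."""
    W = sum(w * affinity(X) for w, X in zip(weights, modalities))
    D = np.diag(W.sum(axis=1))
    L = D - W                          # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]          # skip the constant eigenvector

rng = np.random.default_rng(1)
X_text, X_visual = rng.normal(size=(10, 5)), rng.normal(size=(10, 3))
Y = fused_embedding([X_text, X_visual], weights=[0.7, 0.3], dim=2)
print(Y.shape)  # one 2-D point per input object
```

    In the thesis's optimal-fusion approach the `weights` vector is not hand-picked but chosen to minimize the inconsistency of the fused affinity matrix; the no-fusion approach instead keeps each modality's graph objective separate.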

    Machine Learning in Aerodynamic Shape Optimization

    Machine learning (ML) has been increasingly used to aid aerodynamic shape optimization (ASO), thanks to the availability of aerodynamic data and continued developments in deep learning. We review the applications of ML in ASO to date and provide a perspective on the state of the art and future directions. We first introduce conventional ASO and its current challenges. Next, we introduce ML fundamentals and detail the ML algorithms that have been successful in ASO. Then, we review ML applications to ASO in three respects: compact geometric design spaces, fast aerodynamic analysis, and efficient optimization architectures. In addition to providing a comprehensive summary of the research, we comment on the practicality and effectiveness of the developed methods. We show how cutting-edge ML approaches can benefit ASO and address challenging demands, such as interactive design optimization. Practical large-scale design optimizations remain a challenge because of the high cost of ML training. Further research on coupling ML model construction with prior experience and knowledge, such as physics-informed ML, is recommended to solve large-scale ASO problems.
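    The "fast aerodynamic analysis" aspect typically means replacing expensive solver calls with a cheap learned surrogate. A minimal sketch of that loop, with a made-up one-parameter drag function standing in for a CFD solver and a plain polynomial as the surrogate:

```python
import numpy as np

def drag(x):
    """Stand-in for an expensive CFD evaluation of drag versus a single
    shape parameter (purely illustrative, not a real aerodynamic model)."""
    return (x - 0.3) ** 2 + 0.05 * np.sin(8 * x)

# 1) Run a handful of "expensive" simulations at sampled designs.
x_train = np.linspace(0.0, 1.0, 8)
y_train = drag(x_train)

# 2) Fit a cheap surrogate (here a cubic polynomial) to the samples.
coeffs = np.polyfit(x_train, y_train, deg=3)

# 3) Search the surrogate densely instead of the costly solver.
x_grid = np.linspace(0.0, 1.0, 1001)
x_best = x_grid[np.argmin(np.polyval(coeffs, x_grid))]
print(x_best)  # surrogate's predicted low-drag design
```

    In practical ASO the surrogate would be a neural network or Gaussian process over many shape parameters, and the loop would iterate, re-evaluating promising designs with the true solver and retraining, which is where the ML training cost noted above arises.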

    Adapting heterogeneous ensembles with particle swarm optimization for video face recognition

    In video-based face recognition applications, matching is typically performed by comparing query samples against biometric models (i.e., an individual's facial model) that are designed with reference samples captured during an enrollment process. Although statistical and neural pattern classifiers may represent a flexible solution to this kind of problem, their performance depends heavily on the availability of representative reference data. With operators involved in the data acquisition process, the collection and analysis of reference data are often expensive and time consuming. However, although only a limited amount of data is initially available during enrollment, new reference data may be acquired and labeled by an operator over time. Still, because of limited control over changing operational conditions and personal physiology, classification systems used for video-based face recognition are confronted with complex and changing pattern recognition environments. This thesis concerns adaptive multiclassifier systems (AMCSs) for incremental learning of new data during enrollment and update of biometric models. To avoid corruption of knowledge (facial models) over time, the proposed AMCS uses a supervised incremental learning strategy based on dynamic particle swarm optimization (DPSO) to evolve a swarm of fuzzy ARTMAP (FAM) neural networks in response to new data. As each particle in a FAM hyperparameter search space corresponds to a FAM network, the learning strategy adapts learning dynamics by co-optimizing all of their parameters (hyperparameters, weights, and architecture) in order to maximize accuracy while minimizing computational cost and memory resources. To achieve this, the relationship between the classification and optimization environments is studied and characterized, leading to the following additional contributions.
An initial version of this DPSO-based incremental learning strategy was applied to an adaptive classification system (ACS), where the accuracy of a single FAM neural network is maximized. It is shown that the original definition of a classification system capable of supervised incremental learning must be reconsidered in two ways. Not only must a classifier's learning dynamics be adapted to maintain a high level of performance over time, but some previously acquired validation data must also be used during adaptation. It is empirically shown that adapting a FAM during incremental learning constitutes a type III dynamic optimization problem in the search space, where the local optima values and their corresponding positions change in time. Results also illustrate the necessity of a long-term memory (LTM) to store previously acquired data for unbiased validation and performance estimation. The DPSO-based incremental learning strategy was then modified to evolve the swarm (or pool) of FAM networks within an AMCS. A key element for the success of ensembles is tackled: classifier diversity. With several correlation and diversity indicators, it is shown that genotype (i.e., hyperparameter) diversity in the optimization environment is correlated with classifier diversity in the classification environment. Following this result, properties of a DPSO algorithm that seeks to maintain genotype particle diversity in order to detect and follow local optima are exploited to generate and evolve diversified pools of FAM classifiers. Furthermore, a greedy search algorithm is presented to perform efficient ensemble selection based on accuracy and genotype diversity. This search algorithm yields diversified ensembles without evaluating costly classifier diversity indicators, and the selected ensembles reach accuracy comparable to that of reference ensemble-based and batch learning techniques, with only a fraction of the resources.
Finally, after studying the relationship between the classification environment and the search space, the objective space of the optimization environment is also considered. An aggregated dynamical niching particle swarm optimization (ADNPSO) algorithm is presented to guide the FAM networks according to two objectives: FAM accuracy and computational cost. Instead of purely solving a multi-objective optimization problem to provide a Pareto-optimal front, the ADNPSO algorithm aims to generate pools of classifiers in which both genotype and phenotype (i.e., objective) diversity are maximized. ADNPSO thus uses information in the search space to guide particles towards different local Pareto-optimal fronts in the objective space. A specialized archive is then used to categorize solutions according to FAM network size and to capture locally non-dominated classifiers. These two components are then integrated into the AMCS through an ADNPSO-based incremental learning strategy. The AMCSs proposed in this thesis are promising since they create ensembles of classifiers designed with the ADNPSO-based incremental learning strategy and provide a high level of accuracy that is statistically comparable to that obtained through mono-objective optimization and reference batch learning techniques, while requiring only a fraction of the computational cost.
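The particle swarm machinery underlying the strategies above can be illustrated with a plain (non-dynamic, single-objective) PSO minimizing a made-up two-hyperparameter loss surface. The thesis's actual DPSO additionally tracks moving optima, maintains niche diversity, and co-optimizes FAM weights and architecture; none of that is shown here:

```python
import numpy as np

def loss(p):
    """Stand-in validation error over two hyperparameters (the thesis
    optimizes FAM hyperparameters; this smooth surface is made up)."""
    x, y = p
    return (x - 0.5) ** 2 + (y + 0.2) ** 2

rng = np.random.default_rng(2)
n, dims, iters = 12, 2, 60
w, c1, c2 = 0.7, 1.5, 1.5              # inertia / cognitive / social weights

pos = rng.uniform(-1, 1, (n, dims))    # one particle = one candidate setting
vel = np.zeros((n, dims))
pbest = pos.copy()
pbest_val = np.apply_along_axis(loss, 1, pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.apply_along_axis(loss, 1, pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)  # swarm's best hyperparameter setting, near (0.5, -0.2)
```

In the incremental-learning setting each `loss` evaluation would mean training and validating a FAM network, which is why the thesis invests so heavily in keeping the swarm diverse and the number of evaluations small.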

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    k-Means

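    The entry above carries no abstract; for context, the standard k-means procedure (Lloyd's algorithm: alternate nearest-centroid assignment and centroid update) can be sketched in a few lines, with synthetic data for illustration:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):   # converged: assignments stable
            break
        centers = new
    return centers, labels

# Two well-separated blobs; k-means should recover one center per blob.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
centers, labels = kmeans(X, k=2)
print(np.sort(centers[:, 0]))  # roughly one center near 0, one near 3
```

    The usual caveats apply: the result depends on initialization, and k must be chosen in advance.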