176 research outputs found

    Generation of energy labels for buildings

    This bachelor's thesis provides building energy-monitoring web applications with two additional tools for visualizing and comparing energy consumption. The first tool offers a service that displays consumption as interactive charts. The second offers a service that renders consumption as energy labels with a rating scale from A to G. Both web services can be integrated into the energy-monitoring software SBat3, used by the canton of Valais, and into a CTI project currently under development at the IIG institute. The client software queries the two modules through the address of the PHP server on which the services are implemented; the queried address carries a parameter giving the address of the XML file containing the energy data to process. This design therefore lets the modules work independently of the technology in which the client software is implemented. For the chart-generation module, the client receives a response in the form of HTML and JavaScript code representing the chart; the charts are generated with the JavaScript library Highcharts. For the energy-label-generation module, the client receives in response a PNG image or a PDF document representing the energy label; the response files are generated through the wkhtmltopdf and wkhtmltoimage utilities. This project was carried out as a HES bachelor's thesis in business information technology, from 17 May to 16 August 2010, for a total of 364 hours.
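    The client/server protocol described above (a service address plus a parameter pointing at the XML data file) can be sketched as follows. This is a minimal illustration only: the endpoint path `chart.php` and the parameter name `data` are assumptions, not values documented in the thesis.

```python
from urllib.parse import urlencode

def build_chart_request(server_url: str, xml_data_url: str) -> str:
    """Build the request URL: the PHP service address plus a parameter
    giving the address of the XML file holding the energy data.
    Endpoint and parameter names are hypothetical."""
    return server_url + "?" + urlencode({"data": xml_data_url})

url = build_chart_request(
    "http://example.org/services/chart.php",
    "http://example.org/data/building42.xml",
)
```

    Because the client only builds a URL and consumes the returned HTML/JavaScript (or PNG/PDF), it can be written in any technology, which is the decoupling the abstract emphasizes.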

    Prediction of unsupported excavations behaviour with machine learning techniques

    Artificial intelligence and machine learning algorithms have attracted increasing interest from the research community, triggering new applications and services in many domains. In geotechnical engineering, for instance, neural networks have been used to benefit from information gained at a given site in order to extract relevant constitutive soil information from field measurements [1]. The goal of this work is to use supervised machine learning techniques in order to predict the behaviour of a sheet pile wall excavation, minimizing a loss function that maps the input (excavation depth, soil characteristics, wall stiffness) to a predicted output (wall deflection, soil settlement, wall bending moment). Neural networks are used to perform this supervised learning. A neural network is composed of neurons, which apply a mathematical function to their input (see Figure 1, left), and synapses, which feed the output of one neuron to the input of another. For our purpose, neural networks can be understood as a set of nonlinear functions which can be fitted to data by changing their parameters. In this work, a simple class of neural networks, called Multi-Layer Perceptrons (MLPs), is used. They are composed of an input layer of neurons, an output layer, and one or several middle layers (hidden layers) (see Figure 1, right). A neural network learns by adjusting its weights and biases in order to minimize a certain loss function (for instance, the mean squared error) between the desired and the predicted output. Stochastic gradient descent or one of its variants is used to adjust the parameters, and the gradients are obtained through backpropagation (an efficient application of the chain rule). The interest in neural networks comes from the fact that they are universal function approximators, in the sense that they can approximate any continuous function to any precision given enough neurons. However, this can lead to over-fitting problems where the network learns the noise in the data or, worse, memorizes each sample by rote [2].
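    The training loop described above (an MLP with one hidden layer, mean squared error, backpropagation, per-sample stochastic gradient descent) can be sketched as follows. The two toy input features and the quadratic target are stand-ins, not the thesis's actual excavation data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))               # toy inputs
y = (X[:, 0] ** 2 + 0.5 * X[:, 1]).reshape(-1, 1)   # toy target

# Parameters of a 2 -> 16 -> 1 Multi-Layer Perceptron
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)    # hidden layer: nonlinear neurons
    return h, h @ W2 + b2       # linear output layer

lr = 0.05
for epoch in range(300):
    for i in rng.permutation(len(X)):      # stochastic: one sample at a time
        x, t = X[i:i + 1], y[i:i + 1]
        h, out = forward(x)
        # Backpropagation: chain rule applied to the squared-error loss
        d_out = 2 * (out - t)
        dW2 = h.T @ d_out; db2 = d_out[0]
        d_h = (d_out @ W2.T) * (1 - h ** 2)  # derivative of tanh
        dW1 = x.T @ d_h; db1 = d_h[0]
        # Gradient-descent parameter updates
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))   # training loss after fitting
```

    With enough hidden neurons the network drives the mean squared error close to zero on the training set, which is exactly where the over-fitting risk mentioned above comes from.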

    Computer-Aided Specification, Evaluation and Monitoring of Information Systems

    One of the crucial issues in specifying requirements for an Information System is to guarantee the effectiveness and the efficiency of their future implementation. The objectives of the methodology proposed in the IDA project and presented in this paper have been: 1. to propose a model and an associated language for a more rigorous specification of Information Systems; 2. to develop, among others, two complementary software tools allowing the experimental evaluation of this specification, by prototyping and simulation, before implementing it; and, more recently, 3. to outline an automated monitoring of the implemented Information System.

    Labeled Images Verification Using Gaussian Mixture Models

    Abstract: In this paper, we propose an automated system to verify that images are correctly associated with labels. The novelty of the system is the use of Gaussian Mixture Models (GMMs) as the statistical modeling scheme, as well as several improvements introduced specifically for the verification task. Our approach is evaluated on the Caltech 101 database. Starting from an initial baseline system with an equal error rate of 27.4%, we show that the error rate can be reduced to 13% by introducing several optimizations of the system. The advantage of the approach lies in the fact that essentially any object can be generically and blindly modeled with limited supervision. A potential target application is the post-filtering of images returned by search engines, to prune out or reorder less relevant images.
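    A common way to use GMMs for such a verification task is a log-likelihood-ratio test between a model of the claimed label and a background ("world") model. The sketch below illustrates that scheme on synthetic features; the feature extraction, model sizes and the ratio test are assumptions, not necessarily the paper's exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic 8-dimensional features standing in for image descriptors
label_feats = rng.normal(0.0, 1.0, size=(300, 8))   # images of the claimed label
world_feats = rng.normal(3.0, 1.0, size=(300, 8))   # all other images

# One GMM per hypothesis: claimed label vs. background
label_gmm = GaussianMixture(n_components=4, random_state=0).fit(label_feats)
world_gmm = GaussianMixture(n_components=4, random_state=0).fit(world_feats)

def verify(x, threshold=0.0):
    """Accept a label claim when the log-likelihood ratio between the
    label model and the world model exceeds the threshold."""
    llr = label_gmm.score_samples(x) - world_gmm.score_samples(x)
    return llr > threshold

genuine = rng.normal(0.0, 1.0, size=(50, 8))    # correctly labeled test images
impostor = rng.normal(3.0, 1.0, size=(50, 8))   # mislabeled test images
acc = (verify(genuine).mean() + (~verify(impostor)).mean()) / 2
```

    Sweeping the threshold trades false accepts against false rejects; the equal error rate quoted in the abstract is the operating point where the two coincide.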

    Structural network properties of niche-overlap graphs

    The structure of networks has always been of interest to researchers. Investigating their unique architecture allows us to capture insights and to understand the function and evolution of these complex systems. Ecological networks such as food webs and niche-overlap graphs are considered complex systems. The main purpose of this work is to compare the topology of 15 real niche-overlap graphs with random ones. Five measures are treated in this study: (1) the clustering coefficient, (2) the betweenness centrality, (3) the assortativity coefficient, (4) the modularity and (5) the number of chordless cycles. Significant differences between real and random networks are observed. Firstly, we show that niche-overlap graphs display a higher clustering and a higher modularity compared to random networks. Moreover, we find that random networks have hardly any nodes that belong to a single subgraph (i.e. with a betweenness centrality equal to 0) and contain a small number of chordless cycles compared to real networks. These analyses may provide new insights into the structure of these real niche-overlap graphs and may have important implications for the functional organization of species competing for resources and for the dynamics of these systems.
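    The real-versus-random comparison can be sketched with networkx. Since the 15 real niche-overlap graphs are not reproduced here, a small clustered caveman-style graph stands in for them (an assumption), compared against a random graph with the same numbers of nodes and edges.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# A clustered, modular graph standing in for a real niche-overlap graph
real_like = nx.connected_caveman_graph(5, 6)
# Random counterpart with the same node and edge counts
random_g = nx.gnm_random_graph(real_like.number_of_nodes(),
                               real_like.number_of_edges(), seed=0)

def summarize(g):
    """Compute three of the five measures discussed in the abstract."""
    communities = greedy_modularity_communities(g)
    return {
        "clustering": nx.average_clustering(g),
        "modularity": modularity(g, communities),
        # Nodes with betweenness centrality exactly 0 (lie on no shortest path)
        "betweenness_zero": sum(v == 0.0 for v in
                                nx.betweenness_centrality(g).values()),
    }

real_stats = summarize(real_like)
rand_stats = summarize(random_g)
```

    On such a toy pair the clustered graph shows markedly higher clustering and modularity than its random counterpart, mirroring the direction of the differences reported for the real niche-overlap graphs.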

    A language-independent, open-vocabulary system based on HMMs for recognition of ultra low resolution words

    Abstract: In this paper, we introduce and evaluate a system capable of recognizing ultra low resolution words extracted from images such as those frequently embedded in web pages. The design of the system has been driven by the following constraints. First, the system has to recognize small font sizes to which anti-aliasing and resampling procedures have been applied. Such procedures add noise to the patterns and complicate any a priori segmentation of the characters. Second, the system has to be able to recognize any word in an open-vocabulary setting, potentially mixing different languages. Finally, the training procedure must be automatic, i.e. without requiring a large set of data to be manually extracted, segmented and labeled. These constraints led us to an architecture based on ergodic HMMs in which states are associated with characters. We also introduce several performance improvements: increasing the order of the emission probability estimators and including minimum and maximum duration constraints on the character models. The proposed system is evaluated on different font sizes and families, showing good robustness for sizes down to 6 points.

    A language-independent, open-vocabulary system based on HMMs for recognition of ultra low resolution words

    Abstract: In this paper, we introduce and evaluate a system capable of recognizing words extracted from ultra low resolution images such as those frequently embedded in web pages. The design of the system has been driven by the following constraints. First, the system has to recognize small font sizes, between 6 and 12 points, to which anti-aliasing and resampling filters are applied. Such procedures add noise between adjacent characters in the words and complicate any a priori segmentation of the characters. Second, the system has to be able to recognize any word in an open-vocabulary setting, potentially mixing different languages in the Latin alphabet. Finally, the training procedure must be automatic, i.e. without requiring a large set of data to be manually extracted, segmented and labeled. These constraints led us to an architecture based on ergodic HMMs in which states are associated with characters. We also introduce several performance improvements: increasing the order of the emission probability estimators, including minimum and maximum width constraints on the character models, and using a training set consisting of all possible adjacency cases of Latin characters. The proposed system is evaluated on different font sizes and families, showing good robustness for sizes down to 6 points.
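    The decoding step in such a system can be sketched with the Viterbi algorithm on an ergodic HMM whose states are characters. The two-character alphabet and the three discrete "column feature" observation values below are toy assumptions; the real system models features of pixel columns in word images.

```python
import numpy as np

states = ["a", "b"]                      # one HMM state per character
# Ergodic: every character may follow every other character
log_trans = np.log(np.array([[0.6, 0.4],
                             [0.5, 0.5]]))
# P(observed column feature | character), 3 discrete feature values
log_emit = np.log(np.array([[0.7, 0.2, 0.1],
                            [0.1, 0.3, 0.6]]))
log_init = np.log(np.array([0.5, 0.5]))

def viterbi(obs):
    """Most likely character sequence for a sequence of column features."""
    T, N = len(obs), len(states)
    delta = np.full((T, N), -np.inf)     # best log-probability per state
    back = np.zeros((T, N), dtype=int)   # backpointers for path recovery
    delta[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # N x N transition scores
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 1 - 1, -1):
        path.append(int(back[t][path[-1]]))
    return "".join(states[i] for i in reversed(path))

decoded = viterbi([0, 0, 2, 2, 0])
```

    Because the HMM is ergodic and states carry characters rather than words, any character sequence is decodable, which is what makes the system open-vocabulary.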