14 research outputs found

    Impact of an Energy Efficiency Regulation in Northern Canada

    Get PDF
    Extreme cold and the sparse population of Canada’s Northern regions limit human and infrastructural capacity, making it difficult to build energy-efficient homes. Despite these differences, Northern homes are built to codes and standards developed for Canada’s South. In 2008, a by-law was passed in Yellowknife, Canada requiring a minimum EnerGuide Housing (EGH) rating of 80 for all new single-family and two-family residential buildings. EnerGuide’s Energy Rating Service (ERS) program is an energy assessment program for residential housing, formerly known as the EnerGuide Rating for Houses (EGH). Homes are rated on a scale from 0 to 100: lower numbers represent less efficient homes, and 100 represents an airtight, well-insulated house that is net-zero energy. A total of 1,002 City of Yellowknife homes, dating from the 1950s onward, were studied using the ERS database. Performance metrics examined include energy intensity, EGH rating, ACH rating, window types, the thermal resistance of the building envelope, the efficiencies of the primary heating and hot-water heating equipment, total electricity used, and total energy used. The analysis identified the current state of housing in Yellowknife and past and present housing trends, and determined the effect the City of Yellowknife’s new building by-law had on housing performance. Preliminary findings show a pathway to significantly improving the energy efficiency of Yellowknife’s housing stock. This regulation shows other Canadian municipalities that legislation promoting energy-efficient buildings can be very effective.
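    A minimal sketch of the pre/post-by-law comparison this kind of study implies, assuming a hypothetical CSV export of the ERS records; the file name and column names ("year_built", "egh_rating", "energy_intensity") are illustrative, not the actual ERS schema:

```python
import pandas as pd

# Hypothetical export of the Yellowknife ERS records (illustrative schema).
homes = pd.read_csv("yellowknife_ers.csv")

# Split the housing stock at the 2008 by-law and compare mean performance.
homes["era"] = homes["year_built"].apply(
    lambda y: "post-by-law" if y >= 2008 else "pre-by-law")
summary = homes.groupby("era")[["egh_rating", "energy_intensity"]].mean()
print(summary)
```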

    An Efficient Score level Multimodal Biometric System using ECG and Fingerprint

    Get PDF
    A biometric system is a security system that uses a human’s unique traits to identify and authenticate the user. Biometrics refers to biological traits of a human, commonly categorized as physiological traits, such as fingerprint, iris and face, and behavioral characteristics, such as signature style, voice and typing rhythm. Biological signals like electrocardiography (ECG), electromyography (EMG) and electroencephalography (EEG) have not been explored for biometric applications, as their scope was limited to medical applications. Recent surveys suggest that these biological signals can be explored as part of a biometric application. The main objective of this paper is to explore the possibility of using the ECG as part of a multimodal biometric system. The ECG alone has lower accuracy, but fusing it with a traditional biometric like the fingerprint yields a higher accuracy rate and makes the system very difficult to spoof. The proposed multimodal biometric system has an accuracy of 98%, with a false acceptance rate of 2% and a false rejection rate of almost 0%.
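    A minimal sketch of score-level fusion, assuming both matchers already return similarity scores normalized to [0, 1]; the fusion weights and acceptance threshold are illustrative placeholders, not the paper's tuned values:

```python
def fused_score(ecg_score, fp_score, w_ecg=0.4, w_fp=0.6):
    """Weighted-sum score-level fusion of two normalized matcher scores."""
    return w_ecg * ecg_score + w_fp * fp_score

def accept(ecg_score, fp_score, threshold=0.7):
    """Accept the claimed identity if the fused score clears the threshold."""
    return fused_score(ecg_score, fp_score) >= threshold

# Example: a weak ECG match backed by a strong fingerprint match still passes,
# while spoofing requires defeating both modalities at once.
print(accept(ecg_score=0.55, fp_score=0.92))  # True (fused score 0.772)
```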

    A Comprehensive Survey of Automatic Dysarthric Speech Recognition

    Get PDF
    Automatic dysarthric speech recognition (DSR) is crucial for many human-computer interaction systems that enable humans to interact with machines in a natural way. This article presents a comprehensive survey of recent advances in automatic DSR using machine learning (ML) and deep learning (DL) paradigms, focusing on the methodology, databases, evaluation metrics and major findings of previous approaches. The survey covers the main challenges of DSR, such as individual variability, limited training data, contextual understanding, articulation variability, vocal quality changes, and speaking rate variations. From the literature survey, it identifies the gaps in existing work on DSR and provides future directions for its improvement.
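    As a concrete anchor for the ML side of the surveyed pipelines, here is a minimal sketch of a classical DSR baseline (MFCC features plus an SVM word classifier); the file paths and labels are placeholders, and the real systems covered by the survey are far more elaborate:

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load an utterance and summarize it as mean MFCCs (fixed-length vector)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Placeholder training data: paths to dysarthric utterances and word labels.
paths, labels = ["utt1.wav", "utt2.wav"], ["yes", "no"]
X = np.stack([mfcc_features(p) for p in paths])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:1]))
```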

    A Unified and Efficient Coordinating Framework for Autonomous DBMS Tuning

    Full text link
    Recently, using machine learning (ML) based techniques to optimize modern database management systems has attracted intensive interest from both industry and academia. With the objective of tuning a specific component of a DBMS (e.g., index selection, knob tuning), ML-based tuning agents have been shown to find better configurations than experienced database administrators. However, one critical yet challenging question remains unexplored: how to make those ML-based tuning agents work collaboratively. Existing methods do not consider the dependencies among the multiple agents, and the model used by each agent only studies the effect of changing the configurations of a single component. To tune different components of a DBMS, a coordinating mechanism is needed to make the multiple agents cognizant of each other. We also need to decide how to allocate the limited tuning budget among the agents to maximize performance. Such a decision is difficult to make since the distribution of the reward for each agent is unknown and non-stationary. In this paper, we study the above question and present a unified coordinating framework to efficiently utilize existing ML-based agents. First, we propose a message propagation protocol that specifies the collaboration behaviors for agents and encapsulates the global tuning messages in each agent's model. Second, we combine Thompson Sampling, a well-studied reinforcement learning algorithm, with a memory buffer so that our framework can allocate budget judiciously in a non-stationary environment. Our framework defines interfaces adapted to a broad class of ML-based tuning agents, yet simple enough for integration with existing implementations and future extensions. We show that it can effectively utilize different ML-based agents and find better configurations, with 1.4-14.1x speedups in workload execution time compared with baselines.
    Comment: Accepted at the 2023 International Conference on Management of Data (SIGMOD '23).
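    A minimal sketch of the budget-allocation idea, assuming Gaussian Thompson Sampling over per-agent rewards with a fixed-size sliding-window buffer to track non-stationarity; the agent names, window size, and posterior form are illustrative simplifications, not the paper's actual design:

```python
import random
from collections import deque

class BudgetAllocator:
    """Pick which tuning agent gets the next budget unit via Thompson Sampling."""

    def __init__(self, agents, window=20):
        self.agents = agents
        # Sliding-window memory buffer: only recent rewards shape the posterior,
        # so the allocator can adapt when an agent's payoff distribution drifts.
        self.rewards = {a: deque(maxlen=window) for a in agents}

    def choose(self):
        samples = {}
        for a in self.agents:
            r = self.rewards[a]
            mean = sum(r) / len(r) if r else 0.0
            # Wide prior for unexplored agents, tighter as evidence accumulates.
            std = 1.0 / (1 + len(r))
            samples[a] = random.gauss(mean, std)
        return max(samples, key=samples.get)

    def update(self, agent, reward):
        self.rewards[agent].append(reward)

# Usage: each step, grant one tuning iteration to the sampled agent and feed
# back the observed performance improvement as its reward.
alloc = BudgetAllocator(["index_agent", "knob_agent"])
for step in range(5):
    a = alloc.choose()
    alloc.update(a, reward=random.random())  # stand-in for measured speedup
```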

    Obtaining, analyzing and visualizing mixed time series of structured and unstructured data from Twitter

    Get PDF
    The asset and cryptocurrency market has grown significantly in recent years, as has the number of young people interested in investing and making "easy" money. This increase in demand, however, has not been accompanied by an increase in training or education, resulting in many avoidable mistakes and the loss of considerable amounts of money. A common approach to minimizing such risks consists of consulting social media to seek the guidance of experts or to identify the latest trends. Artificial intelligence can be highly beneficial for automating the steps of this approach, in combination with metrics such as the number of mentions or the sentiment of social media comments. The purpose of this project is to develop an application capable of obtaining and processing social media data in real time, automating trend identification and promoting conscious, non-speculative investment. Besides the AI engine behind the app, the project also evaluates the performance of the chosen database and implements a webpage where users can consult the processed data and generated metrics.
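    A minimal sketch of the two metrics named above (mention counts and per-asset sentiment) computed over a batch of tweets; the cashtag regex and tiny sentiment lexicon are illustrative stand-ins for the project's actual AI engine:

```python
import re
from collections import Counter

# Toy sentiment lexicon; a real system would use a trained sentiment model.
POSITIVE = {"bullish", "up", "moon", "gain"}
NEGATIVE = {"bearish", "down", "crash", "loss"}

def metrics(tweets):
    """Count cashtag mentions and accumulate naive sentiment per asset."""
    mentions, sentiment = Counter(), Counter()
    for text in tweets:
        words = set(re.findall(r"[a-z]+", text.lower()))
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        for tag in re.findall(r"\$([A-Za-z]{1,5})\b", text):
            mentions[tag.upper()] += 1
            sentiment[tag.upper()] += score
    return mentions, sentiment

tweets = ["$BTC looking bullish, to the moon", "$ETH crash incoming, bearish"]
print(metrics(tweets))  # BTC: 1 mention, +2; ETH: 1 mention, -2
```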

    An evaluation of the performance of a NoSQL document database in a simulation of a large scale Electronic Health Record (EHR) system

    Get PDF
    Electronic Health Record (EHR) systems can provide significant benefits by improving the effectiveness of healthcare systems. Research and industry projects focusing on storing healthcare information in NoSQL databases have been triggered by practical experience demonstrating that a relational database approach to managing healthcare records has become a bottleneck. Previous studies show that NoSQL databases, characterised in terms of the consistency, availability and partition tolerance (CAP) theorem, have significant advantages over relational databases, such as easy and automatic scaling, better performance and high availability. However, there is limited empirical research evaluating the suitability of NoSQL databases for managing EHRs. This research addressed that gap in the literature by investigating the following general research question: how can a simulation of a large EHR system be developed so that the performance of NoSQL document databases can be evaluated against relational databases?

    Using a Design Science approach informed by a pragmatic worldview, a number of IT artefacts were developed to enable this evaluation: healthcare data models (for a NoSQL document database and a relational database) for the Australian healthcare context, a random healthcare data generator, and a prototype EHR system. The performance of a NoSQL document database (Couchbase) was evaluated against a relational database (MySQL) in terms of database operations (insert, update and delete of EHRs), scalability, EHR sharing and data analysis (complex querying) capabilities in a simulation of a large-scale EHR system constructed in the cloud environment of Amazon Web Services (AWS). Test scenarios covered configurations of 1, 2, 4, 8 and 16 nodes with 1 million, 10 million, 100 million and 500 million records to simulate database operations in a large-scale, distributed EHR system environment. The Couchbase NoSQL document database performed significantly better than the MySQL relational database in most test cases of database operations (insert, update and delete of EHRs), scalability and EHR sharing. However, MySQL performed significantly better than Couchbase in the complex query test that exercised basic analysis capabilities, and Couchbase used significantly more disk space than MySQL to store the same number of EHRs.

    This research made a number of important contributions to knowledge, theory and practice. The main theoretical contribution to design theory was the design and evaluation of a prototype EHR system for simulating database management operations in a large-scale EHR system environment. The prototype was underpinned by two data models, with structures designed for a NoSQL document database and a relational database, and by a random healthcare data generator, all based on Australian healthcare data characteristics and statistics. The design of an EHR data model for a NoSQL document database using an aggregated document modelling approach contributed to data modelling theory for NoSQL document databases through de-normalisation and document aggregation. The design of the random healthcare data generator was another contribution to design theory, based on a data distribution algorithm (multinomial distribution and probability theory) informed by the National Health Data Dictionary and published Australian healthcare statistics. The prototype EHR system allowed this study to demonstrate, through a simulated performance evaluation, that a NoSQL document database has significant performance advantages over relational databases in most of the database management test cases, and hence to demonstrate the utility and efficacy of a NoSQL document database in the simulation of a large-scale EHR system. The research also made important practical contributions. Foremost, the IT artefacts (a data model for storing EHRs in a NoSQL document database, a random healthcare data generator and a prototype EHR system) can be readily adopted by practitioners. Furthermore, because the research is based on open source NoSQL and relational database alternatives, it can provide a sound basis for lower-income as well as higher-income countries to establish their own cost-effective national EHR systems without the restrictions, limitations, complexity or complications of similar proprietary relational database systems.
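    A minimal sketch of the aggregated document modelling idea behind the NoSQL data model: one patient's encounters and diagnoses are embedded (de-normalised) into a single document rather than spread across joined tables, so a read needs no joins at the cost of duplicating reference data. The field names are illustrative, not the thesis's actual Australian schema:

```python
import json

# One self-contained EHR document per patient (aggregated, de-normalised).
ehr_document = {
    "patient_id": "p-000123",
    "name": "Jane Citizen",
    "encounters": [
        {
            "date": "2023-05-01",
            "provider": "Dr A. Smith",        # duplicated, not referenced
            "diagnoses": ["J45.9"],           # embedded, not a child table
            "prescriptions": [{"drug": "salbutamol", "dose": "100mcg"}],
        }
    ],
}
print(json.dumps(ehr_document, indent=2))
```

    In a normalised relational schema the same record would span patient, encounter, diagnosis and prescription tables linked by foreign keys, which is what the complex-query test above favours and the insert/update tests penalise.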

    Forensic DNA databasing: retention regimes and efficacy

    Get PDF
    Three legislative regimes have governed the England and Wales National DNA Database (NDNAD). These are broadly described as the restrictive (1995 – 2001), expansive (2001 – 2013) and semi-restrictive/Protection of Freedoms Act 2012 (PoFA) regimes (2013 – present). The actual effectiveness of the three regimes remains unclear. This research aimed to assess the efficacy of the different regimes in order to advance any reforms that may maximise the utility of the database and enhance the protection of both public security and the individual’s right to privacy. The research focused on the societal and individual interest outcomes of DNA databasing. The methodology involved, first, a document analysis of reports of oversight bodies, contributing to the establishment of the benefits, challenges and risks of the current regime. Secondly, a literature review of research into DNA databasing was conducted. This identified key effectiveness indicators for the assessment of NDNAD regimes. A self-administered semi-structured questionnaire was used to assess the perception of the public about the statutory functions and ethical implications of the NDNAD. The questionnaire also asked about views on the most appropriate inclusion and retention criteria for the database. Lastly, a stakeholder survey was conducted to determine the views of experts on the efficacy of the NDNAD regimes against the effectiveness indicators. Overall, a majority of the 201 participants who answered the public survey perceived the NDNAD to be effective in detecting, investigating and prosecuting crime. The participants were sceptical about the ability of the NDNAD to prevent crime. This suggests a reform of the statutory purpose of DNA retention to reflect actual outcomes. Most participants favoured the inclusion and retention of DNA data from arrested, charged or convicted individuals. A selective regime based on offence seriousness was preferred by participants for the retention of DNA data from convicted adults. This indicates a reform of the current blanket rule, which allows indefinite retention. The surveyed expert group (n = 31, mainly law enforcement officers) perceived the expansive regime to be the most effective, for public security, implementation cost and efficiency reasons. The findings imply discrepancies with the current law governing the NDNAD: whilst participants of the public survey supported further restrictions to the PoFA regime, the expert group favoured the expansive regime. The survey evidence suggests a need for a statutory requirement to generate systematic data about the actual effectiveness of the NDNAD. Further, a consultation scheme should be established to account for the acceptability of the NDNAD regime among a representative sample of the public. These reforms will help improve the legitimacy of the law and ensure a balanced approach in ‘shaping’ the proportionality of the NDNAD regime.