
    Improving the Effect of Electric Vehicle Charging on Imbalance Index in the Unbalanced Distribution Network Using Demand Response Considering Data Mining Techniques

    With the development of electrical network infrastructure and the emergence of concepts such as demand response and the use of electric vehicles for purposes other than transportation, understanding the behavioral patterns of network technical parameters has become essential for managing electrical systems optimally. One of the critical parameters in electrical system management is the distribution network imbalance. There are several ways to improve and control network imbalance; one of them is to detect the behavior of bus imbalance profiles in the network using data analysis. In the past, such data analysis was performed for large environments such as states and countries. However, with the emergence of smart grids, studying and recognizing these patterns in small-scale environments has taken on a fundamental role in the deep management of these networks. Data mining is one of the appropriate methods for identifying behavioral patterns. This paper uses hierarchical and k-means clustering to identify the behavioral pattern of the imbalance index in an unbalanced distribution network. For this purpose, first, in an unbalanced network without electric vehicle parking, the imbalance profile for all buses is estimated. Then, by applying penetration coefficients of 25 and 75 for electric vehicles in the network, the effects of charging/discharging on the imbalance profile are determined. Next, by determining the target cluster and using demand response, the imbalance index is improved; this reduces the number of buses competing in demand response programs. Finally, using the concept of classification, a decision tree is constructed to minimize metering time.
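    For illustration, a minimal sketch of the clustering step follows. The profile matrix, the number of clusters, and the rule for picking the target cluster are assumptions made for the example, not the paper's exact setup.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: one 24-hour imbalance profile per bus (rows = buses).
rng = np.random.default_rng(0)
profiles = rng.random((40, 24))  # 40 buses x 24 hourly imbalance-index values

# Cluster buses by the shape of their imbalance profile.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)

# Assumed selection rule: the "target" cluster is the one with the highest
# mean imbalance; only its buses would be enrolled in demand response.
cluster_means = [profiles[kmeans.labels_ == c].mean() for c in range(3)]
target = int(np.argmax(cluster_means))
target_buses = np.where(kmeans.labels_ == target)[0]
print(f"target cluster {target}: buses {target_buses.tolist()}")
```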

    BagStack Classification for Data Imbalance Problems with Application to Defect Detection and Labeling in Semiconductor Units

    Despite the fact that machine learning supports the development of computer vision applications by shortening the development cycle, finding a general learning algorithm that solves a wide range of applications is still bounded by the "no free lunch" theorem. The search for the right algorithm to solve a specific problem is driven by the problem itself, the availability of data, and many other requirements. Automated visual inspection (AVI) systems represent a major part of these challenging computer vision applications. They are gaining growing interest in the manufacturing industry as a means to detect defective products and keep them from reaching customers. The process of defect detection and classification in semiconductor units is challenging due to the different acceptable variations that the manufacturing process introduces. Further variations are typically introduced by optical inspection systems through changes in lighting conditions and misalignment of the imaged units, which makes defect detection more challenging still. In this thesis, a BagStack classification framework is proposed, which uses stacking and bagging concepts to handle both variance and bias errors. The classifier is designed to handle data imbalance and overfitting by adaptively transforming the multi-class classification problem into multiple binary classification problems, applying a bagging approach to train a set of base learners for each binary problem, adaptively specifying the number of base learners assigned to each problem and the number of samples to use from each class, applying a novel data-imbalance-aware cross-validation technique to generate the meta-data while accounting for imbalance at the meta-data level, and, finally, using a multi-response random forest regression classifier as the meta-classifier. The BagStack classifier makes use of multiple features to solve the defect classification problem. To detect defects, a locally adaptive statistical background modeling is proposed. The proposed BagStack classifier outperforms state-of-the-art image classification techniques on our dataset in terms of overall classification accuracy and average per-class classification accuracy, and the proposed detection method achieves high recall and precision on the considered dataset.
    Doctoral Dissertation, Computer Engineering, 201
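    To make the stacking/bagging idea concrete, here is a minimal sketch under assumed synthetic data and fixed parameters: a one-vs-rest decomposition, one bagged ensemble per binary problem, out-of-fold probabilities as meta-data, and a multi-response random forest regressor as the meta-classifier. The adaptive balancing heuristics of the actual BagStack framework are omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical imbalanced multi-class data standing in for defect-image features.
X, y = make_classification(n_samples=600, n_classes=3, n_informative=8,
                           weights=[0.7, 0.2, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# One bagged base ensemble per class (one-vs-rest binary problems).
bases = [BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=c)
         for c in range(3)]

# Meta-data: out-of-fold probabilities from each binary problem, so the
# meta-learner never sees predictions made on a base learner's own training data.
meta_tr = np.column_stack([
    cross_val_predict(b, X_tr, (y_tr == c).astype(int), cv=5,
                      method="predict_proba")[:, 1]
    for c, b in enumerate(bases)])
for c, b in enumerate(bases):
    b.fit(X_tr, (y_tr == c).astype(int))

# Multi-response regression meta-classifier over one-hot class targets.
meta = RandomForestRegressor(n_estimators=100, random_state=0)
meta.fit(meta_tr, np.eye(3)[y_tr])

meta_te = np.column_stack([b.predict_proba(X_te)[:, 1] for b in bases])
y_pred = meta.predict(meta_te).argmax(axis=1)
print("accuracy:", (y_pred == y_te).mean())
```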

    Towards A Computational Intelligence Framework in Steel Product Quality and Cost Control

    Steel is a fundamental raw material for all industries. It is widely used in various fields, including construction, bridges, ships, containers, medical devices and cars. However, the production process of iron and steel is very complex, consisting of four stages: ironmaking, steelmaking, continuous casting and rolling. It is also extremely complicated to control the quality of steel throughout the full manufacturing process. Therefore, quality control is considered a huge challenge for the whole steel industry. This thesis studies quality control, taking the case of Nanjing Iron and Steel Group, and provides new approaches for quality analysis, management and control in the industry. At present, Nanjing Iron and Steel Group has established a quality management and control system, which oversees many of the systems involved in steel manufacturing. It places high statistical demands on business professionals, resulting in limited use of the system. A large amount of quality data has been collected in each system. At present, all systems mainly focus on processing and analyzing data after the manufacturing process, and quality problems in the products are mainly detected by sampling and experiment. This method cannot detect product quality issues, or predict hidden ones, in a timely manner. In the quality control system, the responsibilities and functions of the different information systems involved are intricate. Each information system is merely responsible for storing the data of its corresponding functions. Hence, the data in each information system is relatively isolated, forming data islands. Iron and steel production belongs to the process industry, so the data in multiple information systems can be combined to analyze and predict product quality in depth and to provide early warnings. Therefore, it is necessary to introduce new product quality control methods in the steel industry. With the waves of Industry 4.0 and intelligent manufacturing, intelligent technology has also been introduced in the field of quality control to improve the competitiveness of iron and steel enterprises. Applying intelligent technology can generate accurate quality analysis and optimal prediction results based on the data distributed across the factory and inform online adjustment of the production process. This not only improves product quality control but also helps reduce product costs. Inspired by this, this thesis provides an in-depth discussion in three chapters: (1) how to use artificial intelligence algorithms to evaluate the quality grade of scrap steel used as raw material is studied in Chapter 3; (2) the probability that longitudinal cracks occur on the surface of continuous casting slabs is studied in Chapter 4; (3) the prediction of the mechanical properties of finished steel plate is addressed in Chapter 5. Together, these three chapters serve as technical support for quality control in iron and steel production.

    Feature Selection Using a Hybrid Model Based on Genetic Algorithms

    This paper proposes a hybrid feature selection model aimed at reducing the dimension of the training space without compromising classification accuracy. The model includes the induction of a decision tree that generates feature subsets, whose relevance is then evaluated using the minimum-classification-error criterion; the evaluation procedure uses the k-nearest-neighbors rule. Dimension reduction usually implies a bound on the classification error; in this work, however, the hybrid selection model is tuned by means of genetic algorithms, which simultaneously minimize both the number of training features and the classification error. In addition, unlike conventional selection techniques, the proposed model quantifies the relevance of each feature in the reduced training set. The model was tested on the identification of hypernasality in speech signals and of ischemic cardiopathy in electrocardiographic records. The databases comprise a population of 90 children (45 recordings per class) and 100 electrocardiographic records (50 per class). The results show an average reduction of the initial training space of up to 88%, with an average classification error below 6%.
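    A toy sketch of the tuning loop follows, assuming a plain binary-mask genetic algorithm whose fitness combines k-NN cross-validation error with a small penalty on subset size; the decision-tree subset generation and the per-feature relevance weighting of the paper are simplified away, and the dataset is a stand-in.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data; the paper used hypernasality and ECG databases.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_feat = X.shape[1]

def fitness(mask):
    # Joint objective: k-NN classification error plus a penalty on subset size.
    if not mask.any():
        return 1.0
    err = 1 - cross_val_score(KNeighborsClassifier(5), X[:, mask], y, cv=3).mean()
    return err + 0.01 * mask.sum() / n_feat  # assumed penalty weight

pop = rng.random((20, n_feat)) < 0.5  # initial population of binary masks
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    pop = pop[np.argsort(scores)]         # elitist sort: best masks first
    children = []
    for _ in range(10):                   # crossover among the top half
        a, b = pop[rng.integers(0, 10, 2)]
        cut = rng.integers(1, n_feat)
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.02  # mutation
        children.append(np.where(flip, ~child, child))
    pop[10:] = children

best = pop[0]
print(f"{best.sum()} features kept, error ~ {fitness(best):.3f}")
```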

    Technology 2002: the Third National Technology Transfer Conference and Exposition, Volume 1

    The proceedings from the conference are presented. The topics covered include the following: computer technology, advanced manufacturing, materials science, biotechnology, and electronics

    Modern Approaches To Quality Control

    Rapid advances have been made in the last decade in quality control procedures and techniques, and most of the existing books try to cover specific techniques in full detail. The aim of this book is to demonstrate quality control processes in a variety of areas, ranging from the pharmaceutical and medical fields to construction engineering and data quality. A wide range of techniques and procedures are covered.

    Automatic Pain Assessment by Learning from Multiple Biopotentials

    Accurate pain assessment plays an important role in proper pain management, especially among hospitalized people experiencing acute pain. Pain is subjective in nature: it is not only a sensory feeling but may also involve affective factors. Self-report pain scales are therefore the main assessment tools as long as patients are able to self-report. However, it remains a challenge to assess pain in patients who cannot self-report. In clinical practice, physiological parameters such as heart rate, and pain behaviors including facial expressions, are observed as empirical references to infer pain objectively. The main aim of this study is to automate this process by leveraging machine learning methods and biosignal processing.
To achieve this goal, biopotentials reflecting autonomic nervous system activity, including the electrocardiogram and galvanic skin response, together with facial expressions measured by facial electromyograms, were recorded from healthy volunteers undergoing an experimental pain stimulus. IoT-enabled biopotential acquisition systems were developed to build the database, with the aim of providing compact and wearable solutions. Using the database, a biosignal processing flow was developed for continuous pain estimation. Signal features were extracted with customized time-window lengths and updated every second. The extracted features were visualized and fed into multiple classifiers trained separately to estimate the presence of pain and pain intensity. Among the tested classifiers, the best pain-presence estimation achieved 90% sensitivity (84% specificity), and the best pain-intensity estimation achieved 62.5% accuracy. The results show the validity of the proposed processing flow, especially for pain-presence estimation at the window level. This study adds one more piece of evidence on the feasibility of developing an automatic pain assessment tool from biopotentials, providing the confidence to move forward to real pain cases. In addition to the method development, the similarities and differences between automatic pain assessment studies were compared and summarized. It was found that, in addition to the diversity of signals, the estimation goals also differed as a result of different study designs, which made cross-dataset comparison challenging. We also discuss which parts of the classical processing flow limit or boost prediction performance, and whether optimization can bring a breakthrough from the system's perspective.
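A minimal sketch of the windowed processing flow is given below, assuming synthetic stand-ins for the three biopotentials, simple per-window features, and an SVM classifier; the window length, sampling rate, features, and labels are illustrative, not the study's exact choices.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 250                      # assumed sampling rate, Hz
WIN = 5 * FS                  # assumed 5-second feature window
HOP = FS                      # features updated every second, as in the study

def window_features(ecg, gsr, emg):
    """Slide over the synchronized signals and emit one feature row per second."""
    rows = []
    for start in range(0, len(ecg) - WIN, HOP):
        s = slice(start, start + WIN)
        rows.append([
            ecg[s].std(),                   # crude proxy for heart-rate variability
            gsr[s].mean(),                  # skin-conductance level
            np.sqrt((emg[s] ** 2).mean()),  # facial-EMG RMS amplitude
        ])
    return np.array(rows)

# Synthetic stand-ins for the recorded biopotentials (60 s each).
rng = np.random.default_rng(0)
t = np.arange(60 * FS)
ecg = np.sin(2 * np.pi * 1.2 * t / FS) + 0.1 * rng.standard_normal(t.size)
gsr = np.cumsum(0.01 * rng.standard_normal(t.size))
emg = rng.standard_normal(t.size)

X = window_features(ecg, gsr, emg)
y = rng.integers(0, 2, len(X))  # placeholder pain / no-pain labels per window

clf = make_pipeline(StandardScaler(), SVC())
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```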

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
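    As a rough sketch of what such an MR streaming step can look like, the pair of scripts below computes one generation of sparse Conway's life under Hadoop Streaming conventions; the input format, the two-field key configuration, and all file names are assumptions, and the thesis's strip-partitioning optimization is not shown.

```python
#!/usr/bin/env python3
# mapper.py -- one Game-of-Life generation as a Hadoop Streaming step.
# Assumed input: one live cell per line as "row<TAB>col". Each live cell
# flags itself as currently alive and casts one vote per neighbor.
import sys

for line in sys.stdin:
    if not line.strip():
        continue
    r, c = map(int, line.split())
    print(f"{r}\t{c}\tA")                        # marker: this cell is alive now
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                print(f"{r + dr}\t{c + dc}\tN")  # one neighbor vote
```

```python
#!/usr/bin/env python3
# reducer.py -- apply the B3/S23 rule per cell. Assumes the framework
# delivers lines sorted and grouped by the (row, col) key, as Hadoop
# Streaming does with -D stream.num.map.output.key.fields=2.
import sys
from itertools import groupby

def cell_key(line):
    r, c, _ = line.split("\t")
    return r, c

for (r, c), group in groupby(sys.stdin, key=cell_key):
    alive, votes = False, 0
    for line in group:
        if line.rstrip().endswith("A"):
            alive = True
        else:
            votes += 1
    if votes == 3 or (alive and votes == 2):
        print(f"{r}\t{c}")                # live cell in the next generation
```

    A run might then chain generations with something like `hadoop jar hadoop-streaming.jar -D stream.num.map.output.key.fields=2 -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input gen0 -output gen1`, feeding each output directory back in as the next generation's input.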

    Target and Non-Target Approaches for Food Authenticity and Traceability

    Over the last few years, the subject of food authenticity and food fraud has received increasing attention from consumers and other stakeholders, such as government agencies and policymakers, control labs, producers, industry, and the research community. Among the different approaches aiming to identify, tackle, and/or deter fraudulent practices in the agri-food sector, the development of new, fast, and accurate methodologies to evaluate food authenticity is of major importance. This book, entitled “Target and Non-Target Approaches for Food Authenticity and Traceability”, gathers original research and review papers focusing on the development and application of both targeted and non-targeted methodologies applied to verify food authenticity and traceability. The contributions cover different foods, some of which are frequently considered among the most prone to adulteration, such as olive oil, honey, meat, and fish. This book is intended for readers aiming to enrich their knowledge through reading contemporary and multidisciplinary papers on the topic of food authentication.