838 research outputs found

    Use of artificial neural network for medical risk assessment analysis

    Get PDF
    For new medical products and new drugs, unanticipated side effects that arise after consuming the new product are a dominant factor in decision making. In this project, an artificial neural network (NN) engine is designed and developed by the authors for the purpose of medical risk assessment. First, an appropriate NN system is designed and trained. We are mainly concerned with how the developed NN is constructed and trained. The designed NN for this case has three layers of neurons: an input layer, a hidden layer with 25 neurons, and an output layer. The results from the NN model match the data used for training. [Hafshejani M K, Sattari Naeini M, Mohammadsharifi A, Langari A. Use of Artificial Neural Network for Medical Risk Assessment Analysis. Life Sci J 2012;9(4):923-925] (ISSN: 1097-8135). http://www.lifesciencesite.com
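
    As a rough illustration of the three-layer architecture described above (input layer, hidden layer of 25 neurons, output layer), the following minimal sketch implements a forward pass in NumPy. The activation functions, input dimensionality, and single risk-score output are assumptions for illustration; the abstract does not specify them.

    import numpy as np

    # Minimal sketch of the three-layer network described above:
    # input layer -> hidden layer (25 neurons) -> output layer.
    # Activations and layer sizes other than the hidden 25 are assumptions.
    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_inputs, n_hidden, n_outputs = 10, 25, 1   # hypothetical input/output sizes
    W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_outputs))
    b2 = np.zeros(n_outputs)

    def forward(x):
        """Forward pass: input -> 25 hidden neurons -> risk score."""
        h = sigmoid(x @ W1 + b1)
        return sigmoid(h @ W2 + b2)

    # Example: score one hypothetical feature vector for a new product.
    print(forward(rng.normal(size=n_inputs)))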

    An end-user platform for FPGA-based design and rapid prototyping of feedforward artificial neural networks with on-chip backpropagation learning

    Get PDF
    The hardware implementation of an artificial neural network (ANN) using field-programmable gate arrays (FPGAs) is a research field that has attracted much interest and attention. Despite the developments made, the programmer still faces various challenges, such as the need to master complex hardware-software development platforms and hardware description languages and to have advanced ANN knowledge. Moreover, such an implementation is very time-consuming. To address these challenges, this paper presents a novel neural design methodology using a holistic modeling approach. Based on the end-user programming concept, the presented solution empowers end users by abstracting the low-level hardware functionalities, streamlining the FPGA design process, and supporting rapid ANN prototyping. A case study of an ANN as a pattern recognition module of an artificial olfaction system trained to identify four coffee brands is presented. The recognition rate as a function of training data features and data representation was analyzed extensively
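
    The learning rule targeted by the on-chip backpropagation described above can be sketched in software as a single gradient-descent update for a feedforward network with one hidden layer. The layer sizes, sigmoid activation, and learning rate below are illustrative assumptions; only the four-class output loosely mirrors the four coffee brands mentioned in the case study.

    import numpy as np

    # Software sketch of one backpropagation update for a single-hidden-layer
    # feedforward network (the algorithm the FPGA platform implements on chip).
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def backprop_step(x, t, W1, W2, lr=0.1):
        """One stochastic gradient-descent update on sample (x, t)."""
        h = sigmoid(x @ W1)                           # hidden activations
        y = sigmoid(h @ W2)                           # network output
        delta_out = (y - t) * y * (1 - y)             # output-layer error term
        delta_hid = (delta_out @ W2.T) * h * (1 - h)  # back-propagated hidden error
        W2 -= lr * np.outer(h, delta_out)
        W1 -= lr * np.outer(x, delta_hid)
        return W1, W2

    rng = np.random.default_rng(1)
    W1 = rng.normal(scale=0.1, size=(8, 6))  # 8 inputs -> 6 hidden (hypothetical)
    W2 = rng.normal(scale=0.1, size=(6, 4))  # 6 hidden -> 4 classes (e.g. 4 coffee brands)
    W1, W2 = backprop_step(rng.normal(size=8), np.array([1., 0., 0., 0.]), W1, W2)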

    An Overview of Multi-Processor Approximate Message Passing

    Full text link
    Approximate message passing (AMP) is an algorithmic framework for solving linear inverse problems from noisy measurements, with exciting applications such as reconstructing images, audio, hyperspectral images, and various other signals, including those acquired in compressive signal acquisition systems. The growing prevalence of big data systems has increased interest in large-scale problems, which may involve huge measurement matrices that are unsuitable for conventional computing systems. To address the challenge of large-scale processing, multiprocessor (MP) versions of AMP have been developed. We provide an overview of two such MP-AMP variants. In row-MP-AMP, each computing node stores a subset of the rows of the matrix and processes the corresponding measurements. In column-MP-AMP, each node stores a subset of the columns and is solely responsible for reconstructing a portion of the signal. We discuss the pros and cons of both approaches, summarize recent research results for each, and explain when each one may be a viable approach. Highlighted aspects include recent results on state evolution for both MP-AMP algorithms and the use of data compression to reduce communication in the MP network
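
    For orientation, the following is a minimal single-processor sketch of an AMP iteration with a soft-thresholding denoiser; the row and column MP variants above partition the matrix A (and the corresponding matrix-vector products) across nodes. The fixed threshold and the problem sizes are simplifying assumptions (practical AMP typically adapts the threshold each iteration).

    import numpy as np

    def soft(u, theta):
        """Soft-thresholding denoiser."""
        return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

    def amp(y, A, theta, iters=30):
        """Reconstruct a sparse x from y = A x + noise with AMP."""
        M, N = A.shape
        x = np.zeros(N)
        z = y.copy()
        for _ in range(iters):
            pseudo = A.T @ z + x                          # pseudo-data for the denoiser
            x_new = soft(pseudo, theta)
            onsager = (z / M) * np.count_nonzero(x_new)   # Onsager correction term
            z = y - A @ x_new + onsager
            x = x_new
        return x

    rng = np.random.default_rng(2)
    M, N, k = 80, 200, 10
    A = rng.normal(size=(M, N)) / np.sqrt(M)
    x_true = np.zeros(N); x_true[rng.choice(N, k, replace=False)] = 1.0
    y = A @ x_true + 0.01 * rng.normal(size=M)
    x_hat = amp(y, A, theta=0.1)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))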

    An Examination of How Robots, Artificial Intelligence, and Machine Learning are Being Applied in the Medical and Healthcare Industries

    Get PDF
    Machine learning techniques are associated with diagnostic systems to apply methods that enable computers to link patient data to earlier data and give instructions to correct the disease. In recent years, researchers have promoted several data-mining-based techniques for disease diagnosis. Each function in machine learning and data mining techniques is built from characteristics and features. As part of prognosis, information must be separated from patient data and from information retrieved from stored databases and comparative records. For any disease, early diagnosis will determine the chances of a correct recovery. Disease prediction therefore becomes an increasingly important task in supporting physicians to deliver efficient treatment. In health care, data is being created and disposed of at an extraordinary rate compared to other sectors. Data for medical profiling is often found in a variety of sources such as electronic health records, lab and imaging systems, doctor notes, and accounts. The medical records database will therefore contain irrelevant data sourced from multiple systems. Preprocessing the data, eliminating irrelevant data, and then making it immediately available for predictive analysis is one of the significant difficulties of the health care industry
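
    A minimal sketch of the workflow described above (preprocess patient records, drop irrelevant fields, then fit a disease-prediction model) is shown below. The file name, column names, and the choice of a logistic-regression classifier are all hypothetical; the abstract does not specify any of them.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    records = pd.read_csv("patient_records.csv")      # hypothetical EHR export
    records = records.drop(columns=["doctor_notes"])  # discard a free-text field
    records = records.dropna()                        # remove incomplete rows

    X = records.drop(columns=["diagnosis"])           # numeric features (assumed)
    y = records["diagnosis"]                          # target label
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))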

    Analyzing the Resilience of Convolutional Neural Networks Implemented on GPUs: Alexnet as a Case Study

    Get PDF
    Convolutional Neural Networks (CNNs) are used extensively in healthcare applications. Presently, GPUs are the most prominent and dominant DNN accelerators used to increase the execution speed of CNN algorithms and improve both their performance and their latency. However, GPUs are prone to soft errors. These errors can dramatically impact the behavior of the GPU. A generated fault may thus corrupt data values or logic operations and cause errors such as Silent Data Corruption. Unfortunately, soft errors propagate from the physical level (microarchitecture) to the application level (CNN model). This paper analyzes the reliability of the AlexNet model based on two metrics: (1) critical kernel vulnerability (CKV), used to identify the malfunction and light-malfunction errors in each kernel, and (2) critical layer vulnerability (CLV), used to track the malfunction and light-malfunction errors through layers. To achieve this, we injected faults into AlexNet, which is widely used in healthcare applications, running on NVIDIA's GPU, using the SASSIFI fault injector as the main evaluation tool. The experiments demonstrate that the average percentage of errors causing model malfunction was reduced from 3.7% to 0.383% by hardening only the vulnerable parts, with an overhead of only 0.2923%. This is a substantial improvement in model reliability for healthcare applications
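
    To illustrate how per-kernel and per-layer vulnerability figures of this kind can be tallied from fault-injection outcomes, here is a small sketch. The exact CKV/CLV definitions belong to the paper; below, vulnerability is simply the fraction of injections per kernel (or per layer) that ended in a malfunction or light-malfunction, which is an assumption made for illustration, as are the log entries.

    from collections import Counter

    # Hypothetical injection log: (layer, kernel, outcome)
    injections = [
        ("conv1", "im2col_kernel", "masked"),
        ("conv1", "gemm_kernel", "malfunction"),
        ("conv2", "gemm_kernel", "light-malfunction"),
        ("fc8",   "gemm_kernel", "masked"),
    ]

    def vulnerability(group_index):
        """Fraction of injections per group that caused (light-)malfunction."""
        totals, bad = Counter(), Counter()
        for record in injections:
            key = record[group_index]
            totals[key] += 1
            if record[2] in ("malfunction", "light-malfunction"):
                bad[key] += 1
        return {k: bad[k] / totals[k] for k in totals}

    print("per-layer  (CLV-style):", vulnerability(0))
    print("per-kernel (CKV-style):", vulnerability(1))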

    PERFORMANCE ANALYSIS AND FITNESS OF GPGPU AND MULTICORE ARCHITECTURES FOR SCIENTIFIC APPLICATIONS

    Get PDF
    Recent trends in computing architecture development have focused on exploiting task- and data-level parallelism from applications. Major hardware vendors are experimenting with novel parallel architectures, such as the Many Integrated Core (MIC) from Intel that integrates 50 or more x86 processors on a single chip, the Accelerated Processing Unit from AMD that integrates a multicore x86 processor with a graphical processing unit (GPU), and many other initiatives from other hardware vendors that are underway. Therefore, various types of architectures are available to developers for accelerating an application. A performance model that predicts the suitability of an architecture for accelerating an application would be very helpful prior to implementation. Thus, in this research, a Fitness model that ranks the potential performance of accelerators for an application is proposed. The Fitness model is then extended using statistical multiple regression to model both the runtime performance of accelerators and the impact of programming models on accelerator performance with a high degree of accuracy. We have validated both performance models for all the case studies. The error rate of these models, calculated using the experimental performance data, is tolerable in the high-performance computing field. In this research, to develop and validate the two performance models, we have also analyzed the performance of several multicore CPU and GPGPU architectures and the corresponding programming models using multiple case studies. The first case study used in this research is a matrix-matrix multiplication algorithm. By varying the size of the matrix from small to very large, the performance of the multicore and GPGPU architectures is studied. The second case study used in this research is a biological spiking neural network (SNN), implemented with four neuron models that have varying requirements for communication and computation, making them useful for performance analysis of the hardware platforms. We report and analyze the performance variation of four popular accelerators (Intel Xeon, AMD Opteron, Nvidia Fermi, and IBM PS3) and four advanced CPU architectures (Intel 32 core, AMD 32 core, IBM 16 core, and SUN 32 core) with problem size (matrix and network size) scaling, available optimization techniques, and execution configuration. This thorough analysis provides insight regarding how the performance of an accelerator is affected by problem size, optimization techniques, and accelerator configuration. We have analyzed the performance impact of four popular multicore parallel programming models, POSIX threading, Open Multi-Processing (OpenMP), Open Computing Language (OpenCL), and Concurrency Runtime, on an Intel i7 multicore architecture; and two GPGPU programming models, Compute Unified Device Architecture (CUDA) and OpenCL, on an NVIDIA GPGPU. With the broad study conducted using a wide range of application complexity, multiple optimizations, and varying problem size, it was found that, according to their achievable performance, the programming models for the x86 processor cannot be ranked across all applications, whereas the programming models for GPGPU can be ranked conclusively. We have also qualitatively and quantitatively ranked all six programming models in terms of their perceived programming effort. The results and analysis in this research, supported by the proposed performance models, indicate that for a given hardware system the best performance for an application is obtained with a proper match of programming model and architecture
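
    As a rough sketch of the regression idea behind such a Fitness/runtime model, one can fit per-accelerator runtime as a linear function of problem-size features and then rank accelerators for a new problem size by predicted runtime. The feature set, the measured runtimes, and the accelerator names below are hypothetical; the thesis's actual regression variables are not reproduced here.

    import numpy as np

    # Feature vector for a problem of size N (e.g. matrix dimension): [1, N, N^2].
    def features(n):
        return np.array([1.0, n, n * n])

    # Hypothetical measured runtimes (seconds) at sizes 256, 512, 1024.
    sizes = [256, 512, 1024]
    measured = {
        "gpu":       [0.02, 0.05, 0.15],
        "multicore": [0.04, 0.12, 0.45],
    }

    X = np.array([features(n) for n in sizes])
    models = {name: np.linalg.lstsq(X, np.array(t), rcond=None)[0]
              for name, t in measured.items()}

    # Predict runtime at a new problem size and rank accelerators by it.
    n_new = 2048
    ranking = sorted(models, key=lambda name: features(n_new) @ models[name])
    print("predicted fastest to slowest at N=%d:" % n_new, ranking)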

    Prediction and Decision Making in Health Care using Data Mining

    Get PDF
    The tendency toward data mining applications in healthcare today is great, because the healthcare sector is rich with information and data mining is becoming a necessity. Healthcare organizations produce and collect large volumes of information on a daily basis. The use of information technologies allows automation of the processes for extracting data that yield interesting knowledge and regularities: manual tasks are eliminated, data are extracted more easily directly from electronic records, and records are transferred onto a secure electronic system of medical records, which will save lives and reduce the cost of healthcare services, as well as enable early discovery of contagious diseases through advanced data collection. Data mining can enable healthcare organizations to predict trends in patient conditions and behaviors, which is accomplished by analyzing data from different perspectives and discovering connections and relations in seemingly unrelated information. Raw data from healthcare organizations are voluminous and heterogeneous. They need to be collected and stored in organized form, and their integration enables the formation of a hospital information system. Healthcare data mining provides countless possibilities for investigating hidden patterns in these data sets. These patterns can be used by physicians to determine diagnoses, prognoses, and treatments for patients in healthcare organizations. DOI: http://dx.doi.org/10.11591/ijphs.v1i2.138

    SCS: 60 years and counting! A time to reflect on the Society's scholarly contribution to M&S from the turn of the millennium.

    Get PDF
    The Society for Modeling and Simulation International (SCS) is celebrating its 60th anniversary this year. Since its inception, the Society has widely disseminated advancements in the field of modeling and simulation (M&S) through its peer-reviewed journals. In this paper we profile research that has been published in the journal SIMULATION: Transactions of the Society for Modeling and Simulation International from the turn of the millennium to 2010; the objective is to acknowledge the contribution of the authors and their seminal research papers, their respective universities/departments, and the geographical diversity of the authors' affiliations. A further objective is to contribute towards understanding the overall evolution of the discipline of M&S; this is achieved through the classification of M&S techniques and their frequency of use, together with an analysis of the sectors that have seen the predominant application of M&S and the context of its application. It is expected that this paper will lead to further appreciation of the Society's contribution in influencing the growth of M&S as a discipline and, indeed, in steering its future direction