37 research outputs found

    Investigation into scalable energy and performance models for many-core systems

    PhD Thesis. It is likely that many-core processor systems will continue to penetrate emerging embedded and high-performance applications. Scalable energy and performance models are two critical aspects that provide insight into the conflicting trade-offs between energy and performance as hardware and software complexity grows. Traditional performance models, such as Amdahl's, Gustafson's and Sun-Ni's laws, have helped the research community and industry to better understand the performance bounds of a system with given processing resources, otherwise known as speedup. However, these models and their existing extensions have limited applicability for energy- and/or performance-driven system optimization in practical systems. For instance, they are typically based on software characteristics, assuming ideal and homogeneous hardware platforms or limited forms of processor heterogeneity. In addition, measuring the speedup and parallelization factors of an application running on a specific hardware platform requires instrumenting the original software code. Indeed, practical speedup and parallelizability models of application workloads running on modern heterogeneous hardware are critical for energy and performance models, as they can inform design and control decisions aimed at improving system throughput and energy efficiency. This thesis addresses these limitations by first developing novel and scalable speedup and energy consumption models based on a more general representation of heterogeneity, referred to as the normal form heterogeneity. A method is then developed whereby standard performance counters found in modern many-core platforms can be used to derive speedup, and therefore the parallelizability of the software, without instrumenting applications. This extends the usability of the new models to scenarios where the parallelizability of software is unknown, potentially enabling Run-Time Management (RTM) of speedup and/or energy efficiency optimization. The models and optimization methods presented in this thesis are validated through extensive experimentation, by running a number of different applications in wide-ranging concurrency scenarios on several homogeneous and heterogeneous Multi/Many Core Processor (M/MCP) systems, ranging from existing off-the-shelf platforms to potential future system extensions. The practical use of these models and methods is demonstrated through real examples, such as studying the effectiveness of the system load balancer. The models and methodologies proposed in this thesis provide guidance toward new opportunities for improving the energy efficiency of M/MCP systems. Funding: Higher Committee of Education Development (HCED) in Iraq.
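
    As a rough illustration of how speedup and parallelizability relate, the following Python sketch fits Amdahl's Law to speedups measured at several core counts (for example, derived from hardware performance counters rather than code instrumentation). It is a simplification for illustration only and does not reproduce the thesis's normal form heterogeneity models; the core counts and speedup values are made up.

        # Illustrative sketch: estimating an application's parallelizability (Amdahl's
        # parallel fraction p) from speedups observed at several core counts.
        import numpy as np
        from scipy.optimize import curve_fit

        def amdahl_speedup(n, p):
            # Classic Amdahl's Law: the serial fraction (1 - p) limits speedup on n cores.
            return 1.0 / ((1.0 - p) + p / n)

        # Hypothetical measurements: core counts and counter-derived speedups.
        cores = np.array([1, 2, 4, 8, 16])
        speedup = np.array([1.0, 1.9, 3.4, 5.6, 8.1])

        (p_hat,), _ = curve_fit(amdahl_speedup, cores, speedup, p0=[0.9], bounds=(0.0, 1.0))
        print(f"estimated parallelizability p = {p_hat:.3f}")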

    A VHDL Model for Implementation of MD5 Hash Algorithm

    With the increase in the amount of data and the number of users in information systems, the requirement for data integrity must be improved as well. One important element of an information system is an authentication scheme, which produces a message authentication code (MAC). One technique to produce a MAC is based on a hash function and is referred to as an HMAC. MD5 is an efficient algorithm for hashing data, and the purpose of implementing it in hardware is to give applications a degree of privacy while keeping the implementation as self-contained as possible, requiring only essential components such as RAM and a clock generator. We therefore focus on using VHDL to implement and compute MD5 as a data integrity checking method, ensuring that the data of an information system is in a correct state. Implementing the MD5 algorithm on a Xilinx Spartan-3A XC3S1400A FPGA with a 50 MHz internal clock satisfies the above requirements.
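
    As a hedged software reference, the following Python sketch (standard library only) computes MD5 and HMAC-MD5 digests; such values could serve as test vectors against which a hardware MD5 core is verified. The key and message are arbitrary examples, not taken from the paper.

        # Software reference values for verifying a hardware MD5/HMAC-MD5 implementation.
        import hmac
        import hashlib

        key = b"secret-key"
        message = b"The quick brown fox jumps over the lazy dog"

        digest = hashlib.md5(message).hexdigest()               # plain MD5 hash
        mac = hmac.new(key, message, hashlib.md5).hexdigest()   # HMAC-MD5 authentication tag

        print("MD5     :", digest)
        print("HMAC-MD5:", mac)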

    Detection and segmentation the affected brain using ThingSpeak platform based on IoT cloud analysis

    The world has accelerated around a new industrial revolution called the Internet of Things, a technology expected to enter all aspects of industrial life and of commercial and civil applications. The Internet of Things enables highly important medical applications, namely linking all medical clinics in the world into a single network capable of analyzing patient data and presenting it to medical professionals anywhere in the world. One such application is determining whether a human brain is healthy. This work proposes a health care system based on medical image analysis in the programmable ThingSpeak platform, using the MATLAB environment built into the platform within the cloud. The analysis is first performed with MATLAB under the Windows operating system and then repeated within the ThingSpeak platform. It includes a classification stage using an SVM classifier: with a linear kernel we achieved a 99.4% classification rate, and with an RBF kernel 98.6% accuracy, in separating infected brains from healthy ones; cross-validation was used to confirm the classification accuracy. The patient's brain is then segmented, the tumor region is isolated, its area is calculated, and the tumor boundaries are found using the k-means technique, to support the specialist doctor when performing the analysis in the cloud environment. Through this work we achieved a 100% match between the analyses performed in the local environment and in the ThingSpeak platform environment, and to support the work we automated the analysis, visualization, and data transfer processes within the cloud and MATLAB environments.
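
    A minimal sketch of the classification stage, using scikit-learn in place of MATLAB's built-in tools: an SVM evaluated with linear and RBF kernels under cross-validation. Feature extraction from the brain images is assumed to have been done already, so X and y below are placeholders rather than the paper's data.

        # SVM classification with linear and RBF kernels, checked by 5-fold cross-validation.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 16))      # placeholder feature vectors per brain scan
        y = rng.integers(0, 2, size=200)    # 0 = healthy, 1 = tumour (placeholder labels)

        for kernel in ("linear", "rbf"):
            scores = cross_val_score(SVC(kernel=kernel), X, y, cv=5)
            print(f"{kernel} kernel: mean CV accuracy = {scores.mean():.3f}")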

    A mechanism design-based secure architecture for mobile ad hoc networks

    To avoid a single point of failure at the certificate authority (CA) in a MANET, a decentralized solution is proposed in which nodes are grouped into different clusters. Each cluster should contain at least two confident nodes: one known as the CA and the other as the registration authority (RA). The Dynamic Demilitarized Zone (DDMZ), formed from one or more RA nodes, is proposed as a solution for protecting the CA node against potential attacks. The problems with such a model are: (1) clusters with only one confident node (the CA) cannot be created, so cluster sizes grow, which negatively affects cluster services and stability; (2) clusters with a high density of RAs can cause channel collisions at the CA; (3) cluster lifetimes are reduced since RA monitoring is always running (i.e., resource consumption). In this paper, we propose a model based on mechanism design that allows clusters with a single trusted node (the CA) to be created. Our mechanism motivates nodes that do not belong to the confident community to participate by giving them incentives in the form of trust, which can be used for cluster services. To achieve this goal, an RA selection algorithm is proposed that selects nodes based on a predefined selection-criteria function, as sketched below. Finally, empirical results are provided to support our solutions.
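
    A hedged sketch of the RA selection idea: rank a cluster's ordinary nodes by a predefined selection-criteria function and grant trust to the chosen ones as an incentive. The node attributes, weights, and trust increment below are illustrative assumptions, not the paper's actual mechanism.

        # Illustrative RA selection based on a weighted criteria function and trust incentives.
        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            trust: float         # accumulated trust (the incentive currency)
            energy: float        # remaining battery, 0..1
            link_quality: float  # link quality towards the CA, 0..1

        def selection_criteria(node: Node) -> float:
            # Example weights; a real mechanism would derive these from its design goals.
            return 0.5 * node.trust + 0.3 * node.energy + 0.2 * node.link_quality

        def select_ra(candidates: list[Node], count: int = 1) -> list[Node]:
            ranked = sorted(candidates, key=selection_criteria, reverse=True)
            chosen = ranked[:count]
            for node in chosen:
                node.trust += 0.1   # incentive: trust granted for serving as RA
            return chosen

        cluster = [Node("n1", 0.4, 0.9, 0.7), Node("n2", 0.6, 0.5, 0.8), Node("n3", 0.2, 0.8, 0.9)]
        print([n.name for n in select_ra(cluster, count=1)])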

    A Prediction Model of Power Consumption in Smart City Using Hybrid Deep Learning Algorithm

    A smart city utilizes vast data collected through electronic means, such as sensors and cameras, to improve daily life by managing resources and providing services. Moving towards a smart grid is one step in realizing this concept. The proliferation of smart grids and the concomitant progress in measuring infrastructure have generated considerable interest in short-term power consumption forecasting. In practice, predicting future power demand has proven to be a crucial factor in preventing energy waste and developing successful power management techniques. In addition, historical time-series data on energy consumption are necessary to derive all relevant knowledge and estimate future use. This paper aims to construct a hybrid forecaster of power consumption over time and compare it with the original deep learning algorithms. The proposed model, LSTM-GRU-PPCM, combines Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks into a Prediction Power Consumption Model. Power consumption data are used as the time-series dataset, and predictions are generated with the developed model. Consumption peaks are avoided by using the proposed LSTM-GRU-PPCM neural network to forecast future load demand. To assess the method thoroughly, a series of experiments was carried out using actual power consumption data from various cities in India. The results show that the LSTM-GRU-PPCM model improves on the original LSTM forecasting algorithm as evaluated by Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for various time series. The proposed model achieved a minimum prediction error of MAE = 0.004 and RMSE = 0.032, excellent values compared to the original LSTM. Significant implications for power quality management and equipment maintenance can be expected from the LSTM-GRU-PPCM approach, as its forecasts allow proactive decision-making and trigger load shedding when power consumption exceeds the allowed level.
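
    The following is an illustrative Python/Keras sketch of a hybrid LSTM + GRU forecaster along the lines of LSTM-GRU-PPCM; the layer sizes, window length, training settings, and synthetic data are assumptions, not the paper's exact configuration.

        # Hybrid LSTM + GRU next-step forecaster over a sliding window of past consumption.
        import numpy as np
        import tensorflow as tf

        WINDOW = 24                                   # past samples used to predict the next value
        rng = np.random.default_rng(0)
        series = rng.random(1000).astype("float32")   # placeholder consumption series
        X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
        y = series[WINDOW:]

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(WINDOW, 1)),
            tf.keras.layers.LSTM(64, return_sequences=True),  # LSTM stage
            tf.keras.layers.GRU(32),                           # GRU stage
            tf.keras.layers.Dense(1),                          # next-step consumption
        ])
        model.compile(optimizer="adam", loss="mse",
                      metrics=[tf.keras.metrics.MeanAbsoluteError(),
                               tf.keras.metrics.RootMeanSquaredError()])
        model.fit(X, y, epochs=2, batch_size=32, verbose=0)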

    Constructing a new mixed probability distribution with fuzzy reliability estimation

    This paper constructs a mixed probability distribution by mixing an exponential distribution with parameter β and a Rayleigh distribution with the same parameter β, using mixing proportions α/(α+1) and 1/(α+1). The mixed PDF and CDF are then derived. The mixed reliability is determined by estimating the two parameters (α, β) with three different methods: maximum likelihood, moments, and percentiles. The fuzzy reliability estimators are compared, and the results of the comparison are explained via a simulation procedure with detailed tables.
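
    As a sketch of the construction, assuming an exponential component with rate β and a Rayleigh component with scale β (the paper's exact parameterization may differ), the mixed PDF and the corresponding reliability function would take the form:

        f(x;\alpha,\beta) = \frac{\alpha}{\alpha+1}\,\beta e^{-\beta x}
            + \frac{1}{\alpha+1}\,\frac{x}{\beta^{2}}\,e^{-x^{2}/(2\beta^{2})}, \qquad x > 0,

        R(t) = 1 - F(t;\alpha,\beta) = \frac{\alpha}{\alpha+1}\,e^{-\beta t}
            + \frac{1}{\alpha+1}\,e^{-t^{2}/(2\beta^{2})}.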

    Intelligent and secure real-time auto-stop car system using deep-learning models

    In this study, we introduce an innovative auto-stop car system empowered by deep learning technology, specifically employing two Convolutional Neural Networks (CNNs) for face recognition and driver drowsiness detection. Implemented on a Raspberry Pi 4, our system is designed to cater exclusively to certified drivers, ensuring enhanced safety through intelligent features. The face recognition CNN model accurately identifies authorized drivers, employing deep learning techniques to verify their identity before granting access to vehicle functions. This first model demonstrates a remarkable accuracy rate of 99.1%, surpassing existing solutions in secure driver authentication. Simultaneously, our second CNN focuses on real-time detection of driver drowsiness, monitoring eye movements and utilizing a touch sensor on the steering wheel. Upon detecting signs of drowsiness, the system issues an immediate alert through a speaker, initiating an emergency parking maneuver and sending a distress message via the Global Positioning System (GPS). The successful implementation of our proposed system on the Raspberry Pi 4, integrated with a real-time monitoring camera, attains an impressive accuracy of 99.1% for both deep learning models. This performance surpasses current industry benchmarks, showcasing the efficacy and reliability of our solution. Our auto-stop car system advances user convenience and establishes unparalleled safety standards, marking a significant stride in autonomous vehicle technology.
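
    As a rough illustration, the following Keras sketch shows the kind of small CNN that could back the drowsiness detector (classifying eye-region crops as open or closed); the architecture and input size are assumptions and do not reproduce the paper's 99.1% models.

        # Small binary CNN classifier for eye-region crops (open vs. closed eyes).
        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(64, 64, 1)),           # grayscale eye-region crop
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),      # P(eyes closed)
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])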

    Comparison of Weibull and Fréchet distributions estimators to determine the best areas of rainfall in Iraq

    In this research, an appropriate distribution for the amount of rainfall in the Iraqi governorates over the period 2006-2014 is identified. Two important distributions are considered, namely the Weibull distribution and the Fréchet distribution. The best-fitting distribution was determined by minimizing goodness-of-fit criteria, specifically the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Rainfall at the stations Mosul, Kirkuk, Tikrit, Khanaqin, Rutba, Baghdad, and Karbala follows a Weibull distribution fitted by maximum likelihood estimation, while for the stations of the other governorates (Najaf, Diwaniyah, Maysan, Basra) the Fréchet distribution, also fitted by maximum likelihood, better represents the data. We also note the superiority of the maximum likelihood method over least squares.
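
    An illustrative sketch of the model-selection procedure in Python/SciPy: fit Weibull and Fréchet distributions to a rainfall sample by maximum likelihood and compare AIC and BIC (lower is better). The data below are synthetic placeholders, not the Iraqi station records.

        # Fit candidate distributions by MLE and compare them with AIC/BIC.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        rain = stats.weibull_min.rvs(1.5, scale=40.0, size=108, random_state=rng)  # fake totals

        candidates = {
            "Weibull": stats.weibull_min,
            "Frechet": stats.invweibull,   # SciPy's name for the Fréchet distribution
        }

        n = len(rain)
        for name, dist in candidates.items():
            params = dist.fit(rain, floc=0)             # MLE with location fixed at 0
            loglik = np.sum(dist.logpdf(rain, *params))
            k = len(params) - 1                         # free parameters (loc was fixed)
            aic = 2 * k - 2 * loglik
            bic = k * np.log(n) - 2 * loglik
            print(f"{name}: AIC={aic:.1f}, BIC={bic:.1f}")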

    Technical Report on Deploying a highly secured OpenStack Cloud Infrastructure using BradStack as a Case Study

    Cloud computing has emerged as a popular paradigm and an attractive model for providing reliable distributed computing, and it is attracting increasing attention both in academic research and industrial initiatives. Cloud deployments are paramount for institutions and organizations of all scales, and the availability of a flexible, free, open-source cloud platform designed with no proprietary software, together with the ability to integrate it with legacy systems and third-party applications, is fundamental. OpenStack is free and open-source software released under the terms of the Apache license, with a modular and distributed architecture that makes it highly flexible. This project aimed at designing a secured cloud infrastructure called BradStack, built on OpenStack in the Computing Laboratory at the University of Bradford. In this report, we present and discuss the steps required to deploy a secured BradStack multi-node cloud infrastructure and to conduct penetration testing on OpenStack services to validate the effectiveness of the security controls on the BradStack platform. The report serves as a practical guideline, focusing on security and practical infrastructure-related issues, and as a reference for institutions considering the implementation of a secured cloud solution.
    Comment: 38 pages, 19 figures

    Risk Factors Associated with Hypertensive Patients at Baquba Teaching Hospital

    Background: Hypertension is one of the most common disorders affecting the heart, blood vessels, brain, and kidneys. It frequently co-occurs with diabetes and is responsible for one in four premature deaths in developed countries. Objective: To evaluate the most important factors that cause hypertension and to study their effects on patients. Patients and Methods: This study was conducted in the recovery unit of Baquba Teaching Hospital from 1/10/2017 to 1/3/2018. It included 100 hypertensive patients (44 male, 56 female) compared with 25 healthy persons (11 male, 14 female). Blood pressure was measured and a questionnaire was completed for each patient covering age, BMI, smoking, number of hours of sleep, beverage consumption, and chronic diseases; 2 cc of blood was then drawn to measure fasting blood glucose. Results: The study indicates significant differences (p < 0.05) in systolic blood pressure, fasting blood glucose, hours of sleep, soft drink consumption, chronic diseases, and smoking. Conclusion: Increased smoking and soft drink consumption directly affect blood pressure, reduced sleep is associated with high blood pressure, and diabetic patients are more prone to hypertension.