
    Identifying opportunities for developing CSP and PV-CSP hybrid projects under current tender conditions and market perspectives in MENA – benchmarking with PV-CCGT

    Concentrating solar power (CSP) is one of the more promising renewable energy technologies, provided that it is equipped with a cost-efficient storage system: thermal energy storage (TES). TES addresses the intermittency that affects other renewable technologies and enables higher capacity factors and lower levelized costs of electricity (LCOE). This is the main reason why solar tower power plants (STPP) with molten salts and integrated TES are considered one of the most promising CSP technologies in the short term [1]. Solar photovoltaics (PV), on the other hand, is a technology whose costs have been decreasing and are expected to continue doing so, providing competitive LCOE values but relatively low capacity factors, since electrical storage systems remain cost-ineffective. By combining the advantages and eliminating the drawbacks of both technologies, hybridized PV-CSP power plants can be a competitive economic solution for firm output power when the CSP plant is operated smartly, regulating its load in response to the PV output. Indeed, previous works have shown that this allows lower LCOEs than stand-alone CSP plants, because the solar field can be better utilized for storing energy during the daytime while PV covers the load [1]. On the fossil-based generation side, the gas turbine combined cycle (CCGT) occupies an outstanding position among power generation technologies: it is considered the most efficient fossil fuel-to-electricity converter, and it offers technological maturity, high flexibility, and a generally low LCOE, which is largely dominated by fuel cost and varies with the local natural gas price. Its main drawback, of course, is its carbon emissions. 
In countries rich in natural gas resources and with vast potential for renewable energy, such as the United Arab Emirates (UAE), abandoning a low-LCOE technology with comparatively low emissions (relative to coal or oil) in favor of costly pure renewable generation seems like an aggressive plan. Hybridizing CCGT with renewable generation can therefore be an attractive option for reducing emissions at reasonable cost, and the UAE, with vast resources of both natural gas and solar energy, is a natural case study. Previous works have shown the advantages of hybrid PV-CCGT and hybrid PV-CSP plants separately [1][2]. In this thesis, CSP and the two hybrid systems are compared on the basis of LCOE and CO2 emissions for the same firm-power capacity factor at a location in the UAE. The results are compared against each other to highlight the benefits of each technology from both environmental and economic standpoints and to provide recommendations for future work in the field. The techno-economic analysis of the CSP (STPP with TES), PV-CSP (STPP with TES), and PV-CCGT power plants was performed with DYESOPT, an in-house tool developed at KTH that runs techno-economic performance evaluations of power plants through multi-objective optimization for specific locations [1]. For this thesis, a convenient location in the UAE was chosen for simulating the performance of the plants. The UAE holds the seventh-largest proven natural gas reserves and has average-to-high global horizontal irradiation (GHI) and direct normal irradiation (DNI) values all year round, although these values are lower than in other MENA countries due to high aerosol concentrations and sand storms. The plants were designed to provide firm power in two cases: first as baseload, and second as an intermediate load of 15 hours, from 6:00 until 21:00. The hours of production were selected based on a typical average daily load profile. 
The CSP and PV-CSP models previously developed in [3][1] were used. In the PV-CSP model, during daytime hours the PV generation covers the desired load while the CSP plant is used partly for electricity production and partly for storing energy in the TES. The energy in the TES is then used to supply firm power during periods of low irradiance and night hours, or according to need. A PV-CCGT model was developed in which the two plants operate simultaneously: PV is prioritized when available while the CCGT fulfils the remaining requirement. The CCGT has a minimum loading, determined by the minimum possible partial loading of the gas turbine under the emission constraints; accordingly, in some cases PV output is curtailed during operation because of this limitation. The main results of the techno-economic analysis are summarized in the comparative analysis of the three proposed power plant configurations: the PV-CCGT plant is the most economic, with a minimum LCOE of 86 USD/MWh, yet it is the least preferable option in terms of carbon emissions. CSP and PV-CSP yielded higher LCOEs, although the PV-CSP configuration met the same capacity factor with an 11% reduction in LCOE compared to stand-alone CSP.
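The PV-CCGT dispatch rule described above can be sketched as a simple hourly merit-order decision. The function below is illustrative only (hypothetical names, a single minimum-load constraint), not DYESOPT's actual implementation:

```python
def dispatch_pv_ccgt(load_mw, pv_mw, ccgt_min_mw, ccgt_max_mw):
    """Split a firm-load requirement between PV and a CCGT.

    PV is used first; the CCGT covers the remainder, but it may not
    run below its minimum stable load, so PV is curtailed whenever
    the residual demand would push the CCGT under that limit.
    Returns (pv_used_mw, ccgt_mw).
    """
    residual = load_mw - pv_mw
    if residual >= ccgt_min_mw:
        # Normal case: PV fully used, CCGT fills the gap.
        return pv_mw, min(residual, ccgt_max_mw)
    # Residual too small: pin the CCGT at minimum load and curtail PV.
    ccgt = ccgt_min_mw
    pv_used = max(load_mw - ccgt, 0.0)
    return pv_used, ccgt
```

For example, with a 100 MW firm load, a 30 MW CCGT minimum load, and 80 MW of available PV, 10 MW of PV would be curtailed so the CCGT can stay at its minimum.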

    Secured Data Masking Framework and Technique for Preserving Privacy in a Business Intelligence Analytics Platform

    The main concept behind business intelligence (BI) is using integrated data from different business systems within an enterprise to make strategic decisions. It is difficult to map internal and external BI users to subsets of the enterprise's data warehouse (DW), so protecting the privacy of this data while maintaining its utility is a challenging task. Today, such DW systems constitute one of the most serious privacy-breach threats an enterprise might face, because many internal users with different security levels have access to BI components. This thesis proposes a data masking framework (iMaskU: Identify, Map, Apply, Sign, Keep testing, Utilize) for a BI platform that protects data at rest, preserves the data format, and maintains data utility at the on-the-fly querying level. A new reversible data masking technique (COntent BAsed Data masking, COBAD) is developed as an implementation of iMaskU. The masking algorithm in COBAD is based on the statistical content of the extracted dataset, so that the masked data cannot be linked to specific individuals or re-identified by any means. The re-identification risk for the COBAD technique was computed using a supercomputer, considering three attack methods: a) a brute-force attack needs, on average, 55 years to crack the key of each record; b) a dictionary attack needs 231 days to crack the same key for the entire extracted dataset (50,000 records); c) for a data linkage attack, the re-identification risk is very low when the common linked attributes are used. The performance of the COBAD masking technique was also validated: using a 1 GB database schema from the TPC-H decision support benchmark, the execution times of the selected TPC-H queries show that COBAD is much faster than AES-128 and 3DES encryption. 
Theoretical and experimental results show that the proposed solution provides a reasonable trade-off between data security and data utility.
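The brute-force figure quoted above depends on the key space and the attacker's trial rate. As a hedged illustration (the key sizes and trial rates below are hypothetical examples, not COBAD's actual parameters), the expected cracking time can be estimated as:

```python
def brute_force_years(key_bits, attempts_per_second):
    """Expected exhaustive-search time in years, assuming the attacker
    must try half the key space on average before hitting the key."""
    expected_attempts = 2 ** (key_bits - 1)
    seconds = expected_attempts / attempts_per_second
    return seconds / (365 * 24 * 3600)
```

For instance, a 56-bit key searched at 10^9 trials per second takes roughly a year on average, while each added key bit doubles that time.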

    Microwave Processing Of Fiber Reinforced Composites (Optimization of Glass Reinforced Epoxy Curing Process)

    Microwave curing of polymer matrix composites has proven to be an attractive substitute for conventional thermal curing. Industrial applications are currently being developed in telecommunications, aerospace, the food industry, enhancement of concrete setting, composites manufacturing, and many other areas, and many universities and research centers around the globe are endeavoring to make the most of this technology. Common research objectives include homogeneity of the cure, acceleration of cure kinetics, the cure reaction mechanism, and enhancement of mechanical properties. To utilize this form of energy efficiently, precise control over power, temperature, and time was applied to achieve the set goals: reduce cure time and thermal overshoots, assure complete cure, and maximize mechanical properties. This work discusses an optimization scenario that achieves these goals by combining data from calorimetric analysis, in-situ temperature and power monitoring, and energy conservation studies. An experimental setup was assembled consisting of a laboratory multi-mode microwave applicator and programmable feedback controllers. For thermal curing, a typical electric furnace was used, with three thermocouples measuring the cavity, mold, and sample temperatures. Test samples consisted of both a neat blend of DGEBA resin and samples of glass-fiber-reinforced epoxy. Prior to testing, the microwave cavity was calibrated to approximate heat losses in the system and thus determine the expected data accuracy. Curing experiments for a specific temperature-time profile show that the microwave applicator not only follows the set temperature but also eliminates thermal lag and temperature overshoot. While the holdback technique could not deliver the required cure cycle, a PID control strategy succeeded in homogeneously curing epoxy and epoxy/fiberglass samples. 
Kinetic knowledge was enriched using DSC to determine expected curing times at different curing temperatures. Based on these data, a selected isothermal temperature of 100 °C was used with variable dwell times between 13 and 30 minutes for microwave curing. Mechanical testing shows that microwave-cured samples somewhat exceeded the conventionally cured ones in both flexural strength and modulus. The DSC-recommended cure time of 13 minutes at 100 °C is a good approximation, which suggests a similar mechanism of cure kinetics in both the thermal and microwave methods. A high ramp rate of 200 °C/min could also be achieved without material degradation or temperature overshoot by carefully controlling power during the ramp stage. The effects of gelation time and vacuum degassing, a major time-saving area, were also tested; the gelation time particularly enhanced the flexural modulus of the epoxy samples. In short, the use of an efficient process controller resulted in superior mechanical properties at practically optimum cure times.

    Some Physical and Environmental Aspects of Shallow Ice Covered Lakes : Emphasis on Lake Vendyurskoe, Karelia, Russia.

    The presence of an ice cover on the surface of a lake insulates the water body from the atmosphere. This prevents or reduces the influence of processes that depend on exchange between the atmosphere and the water surface, implying a significant reduction or elimination of some processes taking place at an open water surface; for instance, a substantial reduction in the solar energy penetrating the water body. This reduction of heat input from the atmosphere is partly compensated for by heat flux from the sediment. Wind-induced currents are replaced by oscillating currents caused by wind action on the ice cover, while the sediment heat flux generates slow density currents along sloping bottoms. This study is almost exclusively devoted to Lake Vendyurskoe, a small lake in the Russian Republic of Karelia. On Lake Vendyurskoe, ice usually forms in November-December and remains for six to seven months; the maximum ice thickness is 60-80 cm. The ice growth can be well described using a degree-day equation. The water temperature was measured continuously in several vertical profiles in the lake, and heat fluxes at the water-ice and sediment-water interfaces were determined by measuring temperature gradients. Due to the gain of heat from the sediments, the heat content of the ice-covered lake increased throughout the ice-covered period. At the time of freeze-over, the water temperature is less than 0.5 °C all the way to the bottom; in April, the temperature profile is almost linear, from 0 °C at the underside of the ice to 4 °C or more at depths below 8 m. The heat flux conducted from water to ice soon after ice formation is about 1 W/m², but it increases in the course of the winter and can reach 5 W/m² in early spring. The sediment heat flux to water decreases throughout the winter; it is highest in early winter and at shallow bottoms, where the sediment is warmest and the water is coldest. 
Typical values are 2-6 W/m² directly after ice formation and 1-2 W/m² by early spring. Oscillation of the ice cover due to wind action produces small horizontal currents and mixing. These currents were measured during several campaigns with an acoustic meter; they had an average magnitude of about 2 mm/s and a maximum of 7 mm/s. In early spring, in the absence of snow on the ice, a considerable amount of solar radiation penetrates the ice cover and introduces hydrodynamic instability and convective mixing. A vertically homogeneous temperature layer develops, which grows downwards. The depletion of oxygen and the development of dissolved oxygen profiles during winter were investigated. The dissolved oxygen content at the time of freeze-over was close to saturation over the entire water body. Dissolved oxygen decreases throughout the winter, much more in deep water than in shallow water; near the sediments, the concentration drops to low values. Diffusion of oxygen into the sediments is found to be the dominant consumption process. When there is convective mixing under the ice in early spring, the dissolved oxygen is redistributed over the homothermal layer. Because of the water movements, this can be associated with increased diffusion into the sediments, which leads to a decrease of dissolved oxygen in shallow water as well. Keywords: convective mixing, dissolved oxygen, heat exchange ice-water, ice cover, oscillatory currents, sediment heat fluxes, water temperature
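The degree-day description of ice growth mentioned above can be sketched with a Stefan-type relation; the coefficient below is a typical literature range, not a value fitted to Lake Vendyurskoe:

```python
import math

def ice_thickness_cm(freezing_degree_days, alpha=2.0):
    """Stefan-type degree-day estimate: ice thickness grows with the
    square root of the accumulated freezing degree-days (°C·day).
    alpha is an empirical coefficient in cm per sqrt(°C·day); values
    of roughly 1.7-2.7 span snow-covered to snow-free lake ice."""
    return alpha * math.sqrt(freezing_degree_days)
```

With alpha = 2.0, about 900-1600 accumulated freezing degree-days reproduce the 60-80 cm maximum thickness reported above.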

    Association of chronic self-perceived stress with mortality and health status outcomes in patients with peripheral artery disease: insights from the portrait registry

    Title from PDF of title page, viewed January 15, 2020. Thesis advisor: Kim Smolderen. Vita. Includes bibliographical references (pages 50-56). Thesis (M.S.)--School of Medicine, University of Missouri--Kansas City, 2019. The prevalence of peripheral artery disease (PAD) is increasing worldwide, and PAD is estimated to affect about 360 million patients by 2030. Patients with PAD are at a higher risk of premature mortality and suffer from disability and functional impairment, both of which contribute to the direct and indirect socioeconomic burden of PAD. These trends are occurring despite the emphasis on controlling traditional risk factors and on interventions to decrease the impact of PAD on patient outcomes. It is therefore critical to identify and study novel risk factors that could affect outcomes in patients with PAD. Chronic mental stress could be one such factor. Mental stress is a potent cardiovascular risk factor and has been associated with the development and progression of coronary disease and worse outcomes, including a higher risk of mortality and poorer quality of life after a myocardial infarction. However, there is a paucity of evidence on the association of chronic mental stress with outcomes in PAD. To address this critical gap, we used data from the Patient-centered Outcomes Related to Treatment practices in peripheral Arterial disease: Investigating Trajectories (PORTRAIT) study, an international registry of patients presenting with symptoms of PAD. Mental stress was quantified at baseline and at 3-, 6- and 12-month follow-up using the validated 4-item Perceived Stress Scale (PSS-4). For each patient, the available PSS-4 scores from all time points were averaged to quantify the subject's average exposure to mental stress over one year. To examine the association of chronic stress with longitudinal mortality and health status outcomes, we performed two separate landmark analyses. 
First, to examine the impact of chronic stress on mortality, we performed a landmark analysis starting at the 12-month follow-up, defining chronic stress for each patient as the average of the PSS-4 scores from baseline through 12 months. Cox regression models, adjusting for patients' demographics (age, sex, race), comorbid conditions (diabetes, hypertension, history of myocardial infarction, congestive heart failure, smoking status), baseline ankle-brachial index, invasive treatment for PAD, and socioeconomic indicators (highest education level, avoidance of care due to cost, and end-of-month resources), were used to assess the independent association of average stress over the first year of follow-up with all-cause mortality over the subsequent four years. Second, to examine the association of chronic stress with 12-month health status outcomes, we defined chronic stress exposure as the average PSS-4 score across the baseline, 3- and 6-month follow-up assessments, quantifying a patient's exposure to chronic stress over the first 6 months of follow-up. Health status was quantified at baseline and 12 months: PAD-specific health status was assessed using the Peripheral Artery Questionnaire (PAQ), and generic health status using the EuroQoL Visual Analog Scale (EQ-5D VAS). Hierarchical multivariable regression models, with random effects for site and adjustment for country, patients' demographics, comorbid conditions, baseline ABI, treatment strategy, and socioeconomic status, were used to examine the independent association of average stress (baseline to 6 months) with recovery in health status at 12 months. In patients in whom an accurate assessment of chronic mental stress and mortality could be made (n=757, mean age 68.5 ± 9.7, 42% female, 28% non-Caucasian), higher average stress scores over 12 months were associated with a greater hazard of mortality in the adjusted model (hazard ratio per +1-unit increase in average PSS-4: 1.08; 95% CI 1.01-1.16; p=0.03). 
Similarly, in patients who had a complete assessment of chronic stress over 6 months and of health status at baseline and 12-month follow-up (n=1060, mean age 67.7, 37% female, 17.7% non-Caucasian), higher average stress scores over 6 months were associated with a poorer PAQ summary score at 12 months in fully adjusted models (-1.4 points per +1-point increase in average PSS-4; 95% CI -2.1, -0.6; p<0.001). Chronic stress in patients with PAD is thus independently associated with a higher mortality risk and poorer health status outcomes. These results set the stage for exploring interventions to examine whether strategies to reduce chronic stress in patients with PAD improve outcomes.
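Because a Cox model's hazard ratio is multiplicative on the linear predictor, the per-unit estimate reported above can be scaled to larger contrasts. A minimal sketch (the 5-point contrast is an illustrative example, not a figure from the registry):

```python
def scaled_hazard_ratio(hr_per_unit, units):
    """Hazard ratio implied by a `units`-point increase in the
    covariate, given the per-unit hazard ratio from a Cox model."""
    return hr_per_unit ** units

# The reported HR of 1.08 per +1 PSS-4 point implies, for a patient
# averaging 5 points higher, an HR of about 1.47.
hr_5 = scaled_hazard_ratio(1.08, 5)
```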

    De novo metagenomic assembly of microbial communities from the lower convective layer of the Red Sea Atlantis II brine environment.

    The lower convective layer of the Red Sea Atlantis II brine pool (ATII-LCL) is an unexplored environment characterized by harsh conditions: high temperature (68 °C), high salinity (26%), high concentrations of heavy metals, and very low oxygen content. Microbial communities inhabiting this extreme environment are expected to have unique structural and functional adaptations, which may be expressed as novel genes or new metabolic pathways, to survive such harsh conditions. Recent advances in next-generation sequencing technologies have increased read lengths (up to about 500 bp in 454 pyrosequencing) and lowered the sequencing cost per gigabase, making it feasible to explore such an interesting environment and to discover novel proteins with potential biotechnological applications. This study is the first attempt to establish a metagenomic assembled dataset from the environmental sample taken from the ATII-LCL. Three successive runs of 454 random shotgun sequencing were performed, producing a large dataset of 1.5 Gb and 4.4 million reads; this approach has been used to increase the sequence coverage of metagenomic datasets in order to overcome the high diversity of some microbial communities. De novo assembly of the pooled reads from all sequencing runs resulted in 40,693 contigs with a maximum contig size of 350 kb. A comparison of assembly versions from the individual runs showed that complete coverage of the genomes contained in the metagenomic sample has not yet been reached. The dataset also shows high complexity in community structure, with no dominant taxonomic classification: the taxonomic assignments of the assembled dataset are distributed among three major bacterial orders, Burkholderiales, Rhizobiales and Pseudomonadales, and one archaeal class, Euryarchaeota. 
The newly established dataset was used to annotate an operon of mercury resistance genes. The annotated mercuric reductase gene (merA) was synthesized and expressed in the lab, showing high enzyme activity compared to its terrestrial peers.
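The coverage shortfall noted above can be reasoned about with the Lander-Waterman model. The community genome size below is a hypothetical placeholder, since the effective metagenome size of the ATII-LCL sample is unknown:

```python
import math

def expected_coverage(n_reads, read_len_bp, genome_size_bp):
    """Lander-Waterman model: expected per-base coverage c = N*L/G and
    the expected fraction of bases left unsequenced, e**(-c)."""
    c = n_reads * read_len_bp / genome_size_bp
    return c, math.exp(-c)

# 4.4 million reads of ~340 bp over a hypothetical 1 Gb metagenome:
cov, missed = expected_coverage(4_400_000, 340, 1_000_000_000)
```

Even at this modest assumed community size, over a fifth of the bases are expected to remain unsequenced, consistent with the incomplete coverage reported above; a more diverse community (larger effective G) leaves even more uncovered.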

    Iterative Time-Varying Filter Algorithm Based on Discrete Linear Chirp Transform

    Denoising of broadband non-stationary signals is a challenging problem in communication systems. In this paper, we introduce a time-varying filter algorithm based on the discrete linear chirp transform (DLCT), which provides local signal decomposition in terms of linear chirps. The method relies on the ability of the DLCT to provide a sparse representation for a wide class of broadband signals. The performance of the proposed algorithm is compared with a discrete fractional Fourier transform (DFrFT) filtering algorithm. Simulation results show that the DLCT algorithm outperforms the DFrFT algorithm and consequently achieves high-quality filtering.

    Unequal error protection for power line communications over impulsive noise channels

    Power line communication (PLC) has recently attracted a lot of interest, with many application areas including data communication in smart grids, where data (from sensors or other measurement units) with different QoS requirements may be transmitted. Power line communications suffer from the power lines' excessive impulsive noise (which can be caused by switching loads on and off). In this thesis, we present a study of power line communications with unequal error protection (UEP) for two and four data priority levels, using hierarchical QAM modulation and space-time block coding. We consider the two commonly used power-line impulsive noise models, with Bernoulli and Poisson arrivals. In our proposed approaches, we achieve UEP at both the bit and symbol levels. Approximate closed-form expressions for the error rates are derived for each priority level, for both single carrier and OFDM, in SISO and MIMO systems. These simplified expressions are then used to implement a bit-loading algorithm that provides UEP over frequency-selective PLC channels. For MIMO PLC channels, we describe three different MIMO schemes to allow more control over the UEP levels, namely: maximum ratio combining (MRC) receive diversity, the Alamouti space-time block code, and a new space-time code structure that allows unequal error protection at the symbol level. Finally, we apply an eigen-beamforming technique, assuming channel knowledge at the transmitter, which improves the BER compared to the other MIMO PLC schemes.
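The bit-level UEP idea behind hierarchical QAM can be sketched with a simple 16-point mapper: the high-priority bits pick the quadrant (separated by the large spacing d1) and the low-priority bits pick the point within it (spacing d2), so the d1/d2 ratio tunes the protection gap. Names and default spacings below are illustrative, not the thesis's exact constellation:

```python
def hqam16_map(hp_bits, lp_bits, d1=2.0, d2=1.0):
    """Hierarchical 16-QAM symbol mapper. The two high-priority bits
    choose the quadrant (protected by the large spacing d1); the two
    low-priority bits choose the point inside it (spacing d2).
    Increasing d1/d2 strengthens protection of the HP stream."""
    qi = 1 - 2 * hp_bits[0]   # in-phase quadrant sign (+1 or -1)
    qq = 1 - 2 * hp_bits[1]   # quadrature quadrant sign
    oi = 1 - 2 * lp_bits[0]   # within-quadrant in-phase offset sign
    oq = 1 - 2 * lp_bits[1]   # within-quadrant quadrature offset sign
    return complex(qi * (d1 + oi * d2 / 2),
                   qq * (d1 + oq * d2 / 2))
```

With d1 = d2 this collapses to a uniform 16-QAM grid; widening d1 trades LP-stream error rate for HP-stream robustness, which is exactly the knob a UEP design exposes.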

    The isotopic composition of Zn in natural materials

    This work represents the most recent development of Zn isotopic measurements and the first identification of Zn isotopic fractionation in natural materials using Thermal Ionisation Mass Spectrometry (TIMS). The procedures developed in this research systematically evaluate and solve several critical analytical issues involved in TIMS Zn isotopic measurements, such as reducing the sample size needed for an accurate and precise measurement, minimizing the effect of interferences on the Zn fractionation, reducing the blank associated with the analyses, dissolving and purifying different natural samples, and the generally ignored issue of the effect of the ion exchange chemistry (Zn separation) on the fractionation of Zn. These procedures have allowed sub-permil fractionations in the isotopic composition of Zn to be revealed in small (1 µg) Zn samples, and low-level (ng) elemental abundances of Zn to be measured accurately by isotope dilution mass spectrometry (IDMS). This thesis uses the rigorous double-spike technique to measure fractionation relative to the internationally proposed absolute Zn isotopic reference material (δ zero), based on high-purity Alfa Aesar 10759, now available to the international isotope community. All the isotopic measurements in natural materials were performed on bulk samples purified by ion exchange chemistry. The isotopic composition of the Zn minerals and igneous rocks agreed with that of the absolute reference material, which makes it possible to consider this reference material as representative of “bulk Earth” Zn. Significant and consistent fractionations of ~+0.3 ‰ per amu were found in 5 sediments from a range of localities; this consistency is attributed to conveyor-type oceanic circulation effects. 
The results from the two metamorphic samples indicate that the fractionation of Zn in these rocks is the same as that found in igneous rocks but different from that of sedimentary rocks. This supports the widely held assumption that high-temperature and high-pressure processes do not fractionate the isotopic composition of chalcophile elements, as has been found for Cd. The clay sample TILL-3 appears to exhibit a consistent, slightly positive Zn fractionation of +0.12 ± 0.10 ‰ amu-1, although within the uncertainties of both igneous and sedimentary rocks, which is not surprising since till is thought to be formed from a range of mixed glacial sediments. The isotopic composition of Zn was also measured in two plant samples and one animal sample. The Zn fractionation of -0.088 ± 0.070 ‰ amu-1 in the rice sample (a C3-type plant material) suggests that Zn isotopes may be used to study Zn systematics in plants. The result obtained for MURST-Iss-A2 (Antarctic krill) was +0.21 ± 0.11 ‰ amu-1 relative to the laboratory standard, similar to the average Zn fractionation of +0.281 ± 0.083 ‰ amu-1 obtained for marine sediments. In this work, the isotopic composition of Zn was also measured in five stone and two iron meteorites. The range of Zn fractionation in the stone meteorites was between -0.287 ± 0.098 and +0.38 ± 0.16 ‰ amu-1, consistent with previous work, although more measurements would be needed to generalize this to all stone meteorites. Among the iron meteorites, Canyon Diablo was found to have the greatest fractionation, +1.11 ± 0.11 ‰ amu-1 relative to the laboratory standard. Of all the meteorites studied, Redfields clearly showed an anomalous isotopic composition, indicating that this meteorite possesses a significantly different Zn isotopic composition compared to all of the other natural materials measured. 
Using 64Zn as the reference isotope, significant differences relative to the laboratory standard of +5.6 ± 0.4 ‰, +4.4 ± 3.6 ‰, +21.0 ± 0.9 ‰ and +27.4 ± 18.8 ‰ were found on 66Zn, 67Zn, 68Zn and 70Zn, respectively. These significant “Redfields anomalies” can be interpreted in a number of ways in relation to their nucleosynthetic production. Whether or not Redfields is a primitive type of iron meteorite, the Redfields anomaly strongly suggests widespread isotopic heterogeneity of at least one part of the Solar System and does not support the suggestion that “Zn was derived from an initially single homogeneous reservoir in the early Solar System”. A pilot study was performed to determine the concentration and the isotopic composition of Zn in river and tap water. The concentration of Zn in river water averaged 6.9 ± 0.8 ng/g, while for tap water it ranged from 13.1 ng/g to 5.2 µg/g. River water was fractionated by -1.09 ± 0.70 ‰ amu-1, while restrained tap water yielded the maximum fractionation of -6.39 ± 0.62 ‰ amu-1 relative to the laboratory standard. The Zn fractionation of tap water is much larger than that of all other natural samples, although the uncertainty is also significantly greater due to the less precise Daly detector used for these preliminary measurements. The fractionation of Zn in seven ultra-pure Zn standard materials was measured relative to the laboratory standard and found to range from -5.11 ± 0.36 ‰ amu-1 for AE 10760 to +0.12 ± 0.16 ‰ amu-1 for Zn IRMM 10440; there appears to be some evidence for a relationship between Zn fractionation and purity. As well as natural materials, the fractionation of Zn was measured in a number of processed materials. None of these results, nor those obtained for natural materials, impact the currently accepted IUPAC value for the atomic weight of Zn. 
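For readers converting between the per-amu values used above and isotope-ratio deltas: under the linear mass-dependent assumption, the delta for a given ratio is simply the per-amu fractionation times the mass difference (the Redfields values stand out precisely because they do not follow this law). A minimal sketch with illustrative sample values:

```python
def delta_permil(f_per_amu, isotope_mass, ref_mass=64):
    """Delta value (‰) for the isotope_mass/ref_mass ratio implied by
    a linear mass-dependent fractionation of f_per_amu (‰ per amu)."""
    return f_per_amu * (isotope_mass - ref_mass)

# A sediment fractionated by +0.3 ‰/amu implies a 66Zn/64Zn delta of
# +0.6 ‰ and a 68Zn/64Zn delta of +1.2 ‰.
```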
Along with the fractionation determinations, the concentration of Zn was also measured by isotope dilution mass spectrometry in all of the samples. The concentration of Zn in the five stony meteorites ranged from 26 ± 13 µg/g (Plainview) to 302 ± 14 µg/g (Orgueil); for the three ordinary chondrites analysed, it ranged from 26 ± 13 µg/g (Plainview) to 64 ± 34 µg/g (Brownfield 1937). The concentration of Zn was measured in two metamorphic rock standard materials; the maximum concentration was 101.5 ± 1.7 µg/g in SDC-1. The concentrations of Zn in the plant samples studied were 22.15 ± 0.42 and 14.62 ± 0.27 µg/g for Rice IMEP-19 and Sargasso NIES No. 9, respectively, which is within the normal range of Zn concentrations. Except for the meteorites, the final uncertainties consistently cover the ranges of the individual concentration measurements and indicate the homogeneity of the samples, including samples from different bottles where available. The final fractional uncertainties obtained for the SRMs were all less than 2.8%, demonstrating the high level of precision possible using IDMS.

    Resource Efficient Authentication and Session Key Establishment Procedure for Low-Resource IoT Devices

    The Internet of Things (IoT) can include many resource-constrained devices, most of which need to communicate securely with their network managers, which are more resource-rich devices in the IoT network. We propose a resource-efficient security scheme that includes authentication of devices with their network managers, authentication between devices on different networks, and an attack-resilient key establishment procedure. Using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool-set, we analyse several attack scenarios to determine the security soundness of the proposed solution, and then we evaluate its performance analytically and experimentally. The performance analysis shows that the proposed solution occupies little memory and consumes little energy during the authentication and key generation processes. Moreover, it protects the network from well-known attacks: man-in-the-middle attacks, replay attacks, impersonation attacks, key compromise attacks, and denial-of-service attacks.
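A resource-light authentication and key-establishment handshake of the kind described can be sketched with symmetric primitives from the Python standard library. This is an illustrative pattern under a pre-shared-key assumption, not the paper's actual protocol:

```python
import hashlib
import hmac
import secrets

def prove(shared_key, challenge):
    """Answer a challenge nonce with an HMAC tag over it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, tag):
    """Constant-time check of a challenge response."""
    return hmac.compare_digest(prove(shared_key, challenge), tag)

def session_key(shared_key, nonce_device, nonce_manager):
    """Derive a per-session key from the long-term key and both
    parties' nonces: every run yields a fresh key (replay resistance)
    without ever transmitting or exposing the long-term key."""
    return hmac.new(shared_key, nonce_device + nonce_manager,
                    hashlib.sha256).digest()

# One run between a device and its network manager:
k = secrets.token_bytes(32)        # pre-shared long-term key
n_dev = secrets.token_bytes(16)    # device's fresh nonce
n_mgr = secrets.token_bytes(16)    # manager's challenge nonce
tag = prove(k, n_mgr)              # device authenticates to manager
sk = session_key(k, n_dev, n_mgr)  # both sides derive the same key
```

Only hash operations are required of the constrained device, which is the usual rationale for symmetric challenge-response schemes over public-key handshakes in low-resource IoT settings.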