    Received Signal Strength Measurement: Suboptimal Handing-over

    Reception of a good-quality GSM signal in any area depends on a number of factors: the Received Signal Strength (RSSI), the number of TRXs at the cell sites, the quality and type of hand-overs, the call traffic in a cell, etc. These factors directly affect the user experience, the operator's image and patronage, and exposure to regulatory penalties. In many parts of the world where GSM services operate, some of the most annoying phenomena include call setup blocking, call drops, inability to initiate calls, and low signal levels on the user's mobile. In this paper, RSSI levels of BTS cells from different network operators are measured to determine the level and quality of received signals and the 'dead' spots around the Covenant University environment, to map the signal strength distribution, and to perform a side-by-side comparison of the signal strength (quality) from these operators. There are several methods for measuring the received signal strength of GSM/LTE networks, including the Ericsson TEMS suite (software and phone), signal meters, and spectrum analyzers; each method has its advantages and drawbacks. In this paper, we measure the Received Signal Strength using an Android smartphone with installed software (KAI BIT Software) that records the RSSI from cell sites along with their locations, Cell IDs, and Location Area Codes (LAC).
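
    The measurement approach above lends itself to simple post-processing. Below is a minimal Python sketch of classifying logged RSSI readings into quality bands and flagging 'dead' spots; the thresholds and the record layout are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: classify GSM RSSI readings (dBm) into quality bands and
# flag 'dead' spots. Thresholds below are illustrative assumptions.

def classify_rssi(rssi_dbm: float) -> str:
    """Map a received signal strength (dBm) to a coarse quality label."""
    if rssi_dbm >= -70:
        return "excellent"
    elif rssi_dbm >= -85:
        return "good"
    elif rssi_dbm >= -100:
        return "fair"
    else:
        return "dead spot"

# Hypothetical measurement records: (latitude, longitude, operator, RSSI dBm)
readings = [
    (6.6718, 3.1581, "OperatorA", -67.0),
    (6.6722, 3.1590, "OperatorB", -104.0),
]

for lat, lon, operator, rssi in readings:
    print(f"{operator} @ ({lat}, {lon}): {rssi} dBm -> {classify_rssi(rssi)}")
```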

    Implementing Orthogonal Frequency Division Multiplexing Using IFFT/FFT

    Orthogonal Frequency Division Multiplexing (OFDM) is a modulation system that offers many advantages over other modulation schemes. OFDM is a particular form of multi-carrier transmission suited to frequency-selective channels and high data rates; it overcomes the Inter-Symbol Interference (ISI) problem by modulating multiple narrow-band sub-carriers in parallel. In this paper, an analysis of OFDM is carried out with emphasis on implementation using the IFFT/FFT as opposed to banks of oscillators and demodulators. The concept of orthogonality of carriers is used to explain the ability to transmit multiple carriers without interference and the ease of decoupling the signal information at the receiver. Matlab simulations were carried out to demonstrate these concepts.
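
    As a companion to the Matlab simulations mentioned above, the following is a minimal NumPy sketch of the IFFT/FFT implementation: QPSK symbols are placed on orthogonal sub-carriers with a single IFFT and decoupled again with a single FFT. The sub-carrier count and cyclic-prefix length are illustrative choices.

```python
import numpy as np

# Minimal OFDM sketch using IFFT/FFT in place of banks of oscillators.
N_SC = 64          # number of orthogonal sub-carriers
CP = 16            # cyclic prefix length (guards against ISI)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SC)

# Map bit pairs to QPSK symbols, one per sub-carrier.
symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

# Transmitter: a single IFFT modulates all sub-carriers in parallel.
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-CP:], time_signal])  # prepend cyclic prefix

# Receiver: strip the prefix and decouple the sub-carriers with one FFT.
rx_symbols = np.fft.fft(tx[CP:])

assert np.allclose(rx_symbols, symbols)  # orthogonality: exact recovery
```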

    Design and Implementation of a Dual Band Mobile Phone Jammer

    Mobile phones, owing to their portability and convenience, have become one of the most widely used devices in mobile communication and an essential part of daily life. Their portability means they are carried everywhere, e.g. churches, lecture halls, medical centers, etc. That convenience can create inconvenience: the continuous beeping or ringing of cell phones becomes annoying where such noise is disruptive, in areas where silence is required or where mobile phone use is restricted or prohibited, such as libraries and study rooms. This paper focuses on the design of a cell phone jammer that prevents mobile communication in restricted areas without interfering with communication channels outside its range. Interference and jamming severely disrupt the ability to communicate by decreasing the effective signal-to-noise ratio and by making parameter estimation difficult at the receiver [5]. As with other radio jamming techniques, a mobile phone jammer transmits a jamming signal at the same frequencies that mobile networks use. This causes enough interference with the communication between mobile phones and base stations to render the phones unusable. Once a mobile jammer is activated, all mobile phones within range indicate "NO NETWORK".
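
    The jamming mechanism described above reduces to simple arithmetic: the receiver sees the signal power divided by the noise-plus-jamming power. A small sketch of that calculation follows; the power levels are illustrative assumptions.

```python
import math

# Sketch of the jamming mechanism: the effective ratio at the handset is
# signal power over noise-plus-jamming power. Power levels are illustrative.

def sinr_db(signal_mw: float, noise_mw: float, jammer_mw: float) -> float:
    """Signal-to-(interference+noise) ratio in dB."""
    return 10 * math.log10(signal_mw / (noise_mw + jammer_mw))

signal = 1e-7   # received downlink power (mW)
noise = 1e-9    # thermal noise floor (mW)

print(f"No jammer : {sinr_db(signal, noise, 0.0):6.1f} dB")   # ~20 dB, usable
print(f"Jammer on : {sinr_db(signal, noise, 1e-6):6.1f} dB")  # ~-10 dB -> 'NO NETWORK'
```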

    The Making of the NEAM Tsunami Hazard Model 2018 (NEAMTHM18)

    The NEAM Tsunami Hazard Model 2018 (NEAMTHM18) is a probabilistic hazard model for tsunamis generated by earthquakes. It covers the coastlines of the North-eastern Atlantic, the Mediterranean, and connected seas (NEAM). NEAMTHM18 was designed as a three-phase project. The first two phases were dedicated to the model development and hazard calculations, following a formalized decision-making process based on a multiple-expert protocol. The third phase was dedicated to documentation and dissemination. The hazard assessment workflow was structured in Steps and Levels. There are four Steps: Step-1) probabilistic earthquake model; Step-2) tsunami generation and modeling in deep water; Step-3) shoaling and inundation; Step-4) hazard aggregation and uncertainty quantification. Each Step includes a different number of Levels. Level-0 always describes the input data; the other Levels describe the intermediate results needed to proceed from one Step to another. Alternative datasets and models were considered in the implementation. The epistemic hazard uncertainty was quantified through an ensemble modeling technique accounting for alternative models' weights and yielding a distribution of hazard curves represented by the mean and various percentiles. Hazard curves were calculated at 2,343 Points of Interest (POI) distributed at an average spacing of ∼20 km. Precalculated probability maps for five maximum inundation heights (MIH) and hazard intensity maps for five average return periods (ARP) were produced from the hazard curves. In the entire NEAM region, MIHs of several meters are rare but not impossible. Considering a 2% probability of exceedance in 50 years (ARP ≈ 2,475 years), the POIs with MIH > 5 m are fewer than 1% and are all in the Mediterranean, on the coasts of Libya, Egypt, Cyprus, and Greece. In the North-East Atlantic, POIs with MIH > 3 m are on the coasts of Mauritania and the Gulf of Cadiz. Overall, 30% of the POIs have MIH > 1 m. NEAMTHM18 results and documentation are available through the TSUMAPS-NEAM project website (http://www.tsumaps-neam.eu/), featuring an interactive web mapper. Although NEAMTHM18 cannot substitute for in-depth analyses at local scales, it represents a first step toward local and more detailed hazard and risk assessments and contributes to designing evacuation maps for tsunami early warning.
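
    Two of the quoted numbers can be illustrated with a short sketch. First, the stated ARP of roughly 2,475 years follows from a 2% exceedance probability in 50 years under a Poisson occurrence assumption; second, the ensemble technique weights alternative hazard curves to obtain summary statistics. The curves and weights below are toy values, not NEAMTHM18 data.

```python
import math
import numpy as np

# Step 1: average return period implied by a 2% probability of exceedance
# in 50 years, assuming Poissonian event occurrence.
p, t = 0.02, 50.0
arp = -t / math.log(1.0 - p)
print(f"ARP ~ {arp:.0f} years")   # ~2475 years, as quoted above

# Step 2: sketch of the ensemble technique: weight alternative hazard
# curves and report the mean (percentiles follow similarly from the
# weighted distribution). Curves and weights are toy values.
curves = np.array([   # exceedance probabilities at 3 illustrative MIH levels
    [0.10, 0.03, 0.005],
    [0.08, 0.02, 0.002],
    [0.15, 0.05, 0.010],
])
weights = np.array([0.5, 0.3, 0.2])  # alternative models' weights (sum to 1)

mean_curve = weights @ curves
print("ensemble mean hazard curve:", mean_curve)
```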

    Cereal yield gaps across Europe

    Europe accounts for around 20% of the global cereal production and is a net exporter of ca. 15% of that production. Increasing global demand for cereals justifies questions as to where and by how much Europe's production can be increased to meet future global market demands, and how much additional nitrogen (N) crops would require. The latter is important because environmental concerns and legislation are as important as production aims in Europe. Here, we used a country-by-country, bottom-up approach to establish statistical estimates of actual grain yield and compared these to modelled estimates of potential yields under either irrigated or rainfed conditions. In this way, we identified the yield gaps and the opportunities for increased cereal production for wheat, barley, and maize, which together represent 90% of the cereals grown in Europe. The combined mean annual yield gap of wheat, barley, and maize was 239 Mt, or 42% of the yield potential. The national yield gaps ranged between 10 and 70%, with small gaps in many north-western European countries and large gaps in eastern and south-western Europe. Yield gaps for rainfed and irrigated maize were consistently lower than those of wheat and barley. If the yield gaps of maize, wheat, and barley were reduced from 42% to 20% of potential yields, annual cereal production would increase by 128 Mt (39%). Potential for higher cereal production exists predominantly in Eastern Europe, and half of Europe's potential increase is located in Ukraine, Romania, and Poland. Unlocking the identified potential for production growth requires a substantial increase in crop N uptake of 4.8 Mt. Across Europe, the average N uptake gaps to achieve 80% of the yield potential were 87, 77, and 43 kg N ha−1 for wheat, barley, and maize, respectively. Emphasis on increasing N use efficiency is necessary to minimize the need for additional N inputs. Whether yield gap reduction is desirable and feasible is a matter of balancing Europe's role in global food security, farm economic objectives, and environmental targets. We received financial contributions from the strategic investment funds (IPOP) of Wageningen University & Research, the Bill & Melinda Gates Foundation, MACSUR under EU FACCE-JPI (funded through several national contributions), and TempAg (http://tempag.net/).
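
    The headline figures are internally consistent, as a short worked check shows: a 239 Mt gap equal to 42% of potential implies roughly 569 Mt of potential and 330 Mt of actual production, and closing the gap to 20% recovers the quoted increase of roughly 128 Mt (small differences are rounding).

```python
# Worked check of the headline numbers: a 239 Mt gap equal to 42% of
# potential, reduced to a 20% gap. Rounding explains small differences.

gap_mt = 239.0                 # combined annual yield gap (Mt)
gap_frac = 0.42                # gap as a fraction of potential yield

potential = gap_mt / gap_frac          # ~569 Mt potential production
actual = potential - gap_mt            # ~330 Mt actual production

new_production = potential * (1 - 0.20)    # close the gap to 20% of potential
increase = new_production - actual

print(f"potential ~ {potential:.0f} Mt, actual ~ {actual:.0f} Mt")
print(f"increase ~ {increase:.0f} Mt ({100 * increase / actual:.0f}%)")  # ~125 Mt (~38%)
```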

    Quality indicators for patients with traumatic brain injury in European intensive care units

    Background: The aim of this study is to validate a previously published consensus-based quality indicator set for the management of patients with traumatic brain injury (TBI) at intensive care units (ICUs) in Europe and to study its potential for quality measurement.

    Changing care pathways and between-center practice variations in intensive care for traumatic brain injury across Europe

    Purpose: To describe ICU stay, selected management aspects, and outcome of Intensive Care Unit (ICU) patients with traumatic brain injury (TBI) in Europe, and to quantify variation across centers. Methods: This is a prospective observational multicenter study conducted across 18 countries in Europe and Israel. Admission characteristics, clinical data, and outcome were described at the patient and center levels. Between-center variation in the total ICU population was quantified with the median odds ratio (MOR), with correction for case-mix and random variation between centers. Results: A total of 2138 patients were admitted to the ICU, with a median age of 49 years; 36% had mild TBI (Glasgow Coma Scale, GCS 13–15). Within 72 h, 636 (30%) were discharged and 128 (6%) died. Early deaths and long-stay patients (> 72 h) had more severe injuries based on the GCS and neuroimaging characteristics than short-stay patients. Long-stay patients received more monitoring, were treated at higher intensity, and experienced worse 6-month outcomes than short-stay patients. Between-center variations were prominent in the proportion of short-stay patients (MOR = 2.3, p < 0.001), the use of intracranial pressure (ICP) monitoring (MOR = 2.5, p < 0.001), and aggressive treatment.
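
    The median odds ratio reported above has a standard closed form in terms of the between-center random-intercept variance of a logistic model: MOR = exp(√(2σ²) · Φ⁻¹(0.75)). A minimal sketch follows; the variance used is a toy value chosen to reproduce an MOR of about 2.3, not an estimate from the study.

```python
import math
from statistics import NormalDist

# The median odds ratio (MOR) translates the between-center random-intercept
# variance of a logistic model onto an odds-ratio scale:
#   MOR = exp( sqrt(2 * sigma^2) * z_0.75 )
# The variance below is a toy value, not one estimated from the study.

def median_odds_ratio(sigma_sq: float) -> float:
    z75 = NormalDist().inv_cdf(0.75)   # 75th percentile of standard normal
    return math.exp(math.sqrt(2.0 * sigma_sq) * z75)

print(f"MOR for sigma^2 = 0.76: {median_odds_ratio(0.76):.1f}")  # ~2.3
```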

    Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury

    Objective: We aimed to explore the added value of common machine learning (ML) algorithms for prediction of outcome after moderate and severe traumatic brain injury. Study Design and Setting: We performed logistic regression (LR), lasso regression, and ridge regression with key baseline predictors in the IMPACT-II database (15 studies, n = 11,022). ML algorithms included support vector machines, random forests, gradient boosting machines, and artificial neural networks, and were trained using the same predictors. To assess the generalizability of predictions, we performed internal, internal-external, and external validation on the recent CENTER-TBI study (patients with Glasgow Coma Scale < 13, n = 1,554). Both calibration (calibration slope/intercept) and discrimination (area under the curve) were quantified. Results: In the IMPACT-II database, 3,332/11,022 (30%) died and 5,233 (48%) had unfavorable outcomes (Glasgow Outcome Scale less than 4). In the CENTER-TBI study, 348/1,554 (29%) died and 651 (54%) had unfavorable outcomes. Discrimination and calibration varied widely between the studies and less so between the studied algorithms. The mean area under the curve was 0.82 for mortality and 0.77 for unfavorable outcomes in the CENTER-TBI study. Conclusion: ML algorithms may not outperform traditional regression approaches in a low-dimensional setting for outcome prediction after moderate or severe traumatic brain injury. Like regression-based prediction models, ML algorithms should be rigorously validated to ensure applicability to new populations.
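
    The two validation metrics named above can be sketched in a few lines: discrimination as the area under the ROC curve, and the calibration slope as the coefficient of a logistic refit on the model's linear predictor. The data below are hypothetical, and scikit-learn is assumed for convenience.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sketch of the validation metrics named above: discrimination (AUC) and
# calibration slope, computed on hypothetical external-validation data.
rng = np.random.default_rng(1)
lin_pred = rng.normal(0, 1.5, size=1000)          # model's linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))  # hypothetical outcomes

auc = roc_auc_score(y, lin_pred)

# Calibration slope: coefficient of a logistic refit on the linear predictor
# (large C makes the fit effectively unregularized; ideal slope is 1.0).
slope_fit = LogisticRegression(C=1e6).fit(lin_pred.reshape(-1, 1), y)
slope = slope_fit.coef_[0, 0]

print(f"AUC = {auc:.2f}, calibration slope = {slope:.2f}")
```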

    Variation in Structure and Process of Care in Traumatic Brain Injury: Provider Profiles of European Neurotrauma Centers Participating in the CENTER-TBI Study.

    INTRODUCTION: The strength of evidence underpinning care and treatment recommendations in traumatic brain injury (TBI) is low. Comparative effectiveness research (CER) has been proposed as a framework to provide evidence for optimal care for TBI patients. The first step in CER is to map the existing variation. The aim of the current study is to quantify variation in general structural and process characteristics among centers participating in the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study. METHODS: We designed a set of 11 provider profiling questionnaires with 321 questions about various aspects of TBI care, chosen based on literature and expert opinion. After pilot testing, the questionnaires were disseminated to 71 centers from 20 countries participating in the CENTER-TBI study. The reliability of the questionnaires was estimated by calculating a concordance rate among the 5% of questions that were duplicated. RESULTS: All 71 centers completed the questionnaires. The median concordance rate among duplicate questions was 0.85. The majority of centers were academic hospitals (n = 65, 92%), designated as level I trauma centers (n = 48, 68%), and situated in an urban location (n = 70, 99%). The availability of facilities for neuro-trauma care varied across centers; e.g. 40 (57%) had a dedicated neuro-intensive care unit (ICU), 36 (51%) had an in-hospital rehabilitation unit, and the organization of the ICU was closed in 64% (n = 45) of the centers. In addition, we found wide variation in processes of care, such as the ICU admission policy and the intracranial pressure monitoring policy, among centers. CONCLUSION: Even among high-volume, specialized neurotrauma centers there is substantial variation in the structures and processes of TBI care. This variation provides an opportunity to study the effectiveness of specific aspects of TBI care and to identify best practices with CER approaches.
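
    The reliability check described above, the concordance rate among duplicated questions, amounts to counting identical answer pairs. A minimal sketch with toy answers:

```python
# Sketch of the reliability check described above: the concordance rate is
# the share of duplicated questions answered identically. Answers are toy data.

def concordance_rate(first_pass: list, second_pass: list) -> float:
    """Fraction of duplicate questions with identical answers."""
    agree = sum(a == b for a, b in zip(first_pass, second_pass))
    return agree / len(first_pass)

answers_v1 = ["yes", "no", "closed ICU", "level I", "yes"]
answers_v2 = ["yes", "no", "open ICU", "level I", "yes"]

print(f"concordance = {concordance_rate(answers_v1, answers_v2):.2f}")  # 0.80
```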

    Mechanical Properties and Microstructural Characterization of Aged Nickel-based Alloy 625 Weld Metal

    The aim of this work was to evaluate the different phases formed during solidification and after thermal aging of as-welded 625 nickel-based alloy weld metal, as well as the influence of microstructural changes on the mechanical properties. The experiments addressed aging temperatures of 650 °C and 950 °C for 10, 100, and 200 hours. The samples were analyzed by electron microscopy, microanalysis, and X-ray diffraction in order to identify the secondary phases. Mechanical tests such as hardness, microhardness, and Charpy-V impact tests were performed. Nondestructive ultrasonic inspection was also conducted to correlate the acquired signals with the mechanical and microstructural properties. The results show that the alloy under study experienced microstructural changes when aged at 650 °C. Aging was responsible for the dissolution of the Laves phase formed during solidification and for the appearance of the γ″ phase within the interdendritic regions and of fine carbides along the solidification grain boundaries. However, when aged at 950 °C, the Laves phase was continuously dissolved and the excess Nb caused the precipitation of the δ-phase (Ni3Nb), which intensified at 10 hours of aging, with subsequent dissolution for longer periods such as 200 hours. Even when subjected to significant microstructural changes, the mechanical properties, especially toughness, were not sensitive to the dissolution and/or precipitation of the secondary phases.