From fixed to mobile femtocells in LTE systems: issues and challenges
This paper investigates the concept of the Mobile Femtocell, an extension of Mobile Relays and Fixed Femtocells. Mobile Femtocells can be deployed in public transport vehicles such as trains and buses, or in private cars, each forming its own cell inside the vehicle to serve vehicular and mobile User Equipments (UEs). The aim is to give cell-edge users better signal strength. The performance of Long Term Evolution (LTE) cell-edge users is investigated by studying the deployment of Mobile Femtocells in the LTE system, since the throughput of cell-edge users can be improved by deploying Fixed or Mobile Femtocells. Four scenarios are considered: Fixed Femtocells with fixed users, Mobile Femtocells with fixed users, Fixed Femtocells with mobile users, and Mobile Femtocells with mobile users. MATLAB simulation results show that Mobile Femtocell users enjoy better Quality of Service than Fixed Femtocell users, as reflected in the improvement of the Mobile Femtocell UEs' spectral efficiency, throughput and SINR over those of Fixed Femtocell users.
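The reported gains in throughput follow directly from the SINR improvement via spectral efficiency. A minimal sketch of that relationship, using the Shannon bound with illustrative SINR and bandwidth values that are not taken from the paper:

```python
import math

def spectral_efficiency(sinr_db: float) -> float:
    """Shannon bound (bit/s/Hz) for a given SINR in dB."""
    return math.log2(1.0 + 10 ** (sinr_db / 10.0))

def throughput_mbps(sinr_db: float, bandwidth_mhz: float) -> float:
    """Ideal throughput in Mbit/s over the given bandwidth."""
    return spectral_efficiency(sinr_db) * bandwidth_mhz

# Illustrative numbers (not from the paper): a cell-edge UE served directly
# by the macro eNodeB vs. the same UE served by an in-vehicle Mobile
# Femtocell, which shortens the radio link and raises the received SINR.
macro_edge = throughput_mbps(sinr_db=0.0, bandwidth_mhz=10.0)
femto_edge = throughput_mbps(sinr_db=15.0, bandwidth_mhz=10.0)
print(f"macro edge: {macro_edge:.1f} Mbit/s, femtocell: {femto_edge:.1f} Mbit/s")
```

The same monotone mapping from SINR to throughput is why the paper can report improvements in all three metrics together.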
Factors influencing cloud computing adoption in Yemen higher education institutions
Cloud-based technology, which is now well established, helps reduce costs while providing accessibility, reliability and flexibility. However, the Yemeni higher education system has not yet embraced cloud computing, owing to security and privacy concerns, lack of trust, negative cultural attitudes (i.e. tribalism) and, most importantly, limited experience with digital devices in educational settings as well as a lack of knowledge and technical know-how. This study therefore proposes a conceptual model of cloud computing (CC) adoption in Yemeni higher education institutions (HEIs) by investigating the influence of Technology, Organization and Environment (TOE) factors. In addition, it investigates the moderating effect of tribalism culture on the relationships between the identified factors and CC adoption. The study employed both qualitative and quantitative approaches. A preliminary study, based on semi-structured interviews with ten participants from the top management of HEIs, was conducted to refine and confirm the proposed model. The quantitative approach was then used to determine the factors that influence CC adoption in Yemeni HEIs through a questionnaire survey. Data were collected from 328 respondents in 38 HEIs and analysed using Partial Least Squares (PLS) Structural Equation Modelling (SEM). The results showed that relative advantage, reliability, compatibility, security, technology readiness, top management support, regulatory policy and competitive pressure have a significant positive impact on CC adoption, whereas tribalism culture has a significant negative impact. The study also found that tribalism culture moderates the relationships between compatibility, reliability, security, relative advantage, regulatory policy and CC adoption. This study contributes to the TOE model by including a cultural factor as a moderator of CC adoption in Yemeni HEIs, and provides a model and insights to help HEIs, technology consultants, vendors and policy makers better understand the factors that influence CC adoption in least developed countries (LDCs), specifically Yemen.
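A moderating effect of the kind tested here corresponds to an interaction term in a regression, the same idea that PLS-SEM moderation analysis builds on. A toy sketch on synthetic data (the variables, coefficients and sample are hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic illustration: 'compat' stands for compatibility, 'tribal' for
# tribalism culture; the product term models the moderation effect.
compat = rng.normal(size=n)
tribal = rng.normal(size=n)
noise = rng.normal(scale=0.1, size=n)

# Assumed data-generating process: the positive effect of compatibility on
# adoption weakens as tribalism rises (negative interaction coefficient).
adoption = 0.6 * compat - 0.3 * tribal - 0.4 * compat * tribal + noise

# OLS with an interaction term recovers the moderation pattern.
X = np.column_stack([np.ones(n), compat, tribal, compat * tribal])
beta, *_ = np.linalg.lstsq(X, adoption, rcond=None)
print(dict(zip(["const", "compat", "tribal", "interaction"], beta.round(2))))
```

A significant negative interaction coefficient is what "tribalism culture moderates the relationship" means operationally.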
Intelligent evacuation management systems: A review
Crowd and evacuation management have been active areas of research in the recent past, and developments continue in the efficient evacuation of crowds at mass gatherings. This article reviews intelligent evacuation management systems, covering crowd monitoring, crowd disaster prediction, evacuation modelling and evacuation path guidance. Soft computing approaches play a vital role in the design and deployment of intelligent evacuation applications for crowd control management. While the review covers both video- and non-video-based aspects of crowd monitoring and crowd disaster prediction, evacuation techniques are reviewed through the theme of soft computing, along with a brief review of evacuation navigation paths. We believe this review will assist researchers in developing reliable automated evacuation systems that help ensure the safety of evacuees, especially in emergency evacuation scenarios.
Neural network design for intelligent mobile network optimisation
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Mobile network users' demands for data services are increasing exponentially, driven by two main factors: the evolution of smartphones and their applications, and emerging technologies such as the Internet of Things and smart cities, which keep pumping more data into the network, even though most of the data routed in current mobile networks is non-live. This increase in demand compels mobile network operators to keep improving their networks, by adding hardware, increasing resources, or a combination of both. Radio resources are strictly limited by spectrum licensing and availability, so efficient spectrum utilisation is a major goal for both network operators and developers. Simultaneous and multiple channel access, and adding more cells to the network, are ways to increase the data exchanged between network nodes. The current 4G mobile system is based on Orthogonal Frequency Division Multiple Access (OFDMA) for medium access, and inter-cell interference degrades link quality at the cell edge; with the introduction of the heterogeneity concept to LTE in Release 10 of the 3GPP specifications, the handover process became even more complex. To mitigate inter-cell interference at the cell edge, coordinated multipoint and carrier aggregation techniques are utilised for dual connectivity. This work focuses on designing and proposing features to improve network performance and sustainability, comprising distributed small cells for data-only transmission, performance evaluation of handover schemes at the cell edge with dual connectivity, and Artificial Intelligence technology for load balancing and prediction.
In the proposed model design, the data and control of the Small eNodeB (SeNodeB) are processed at the network edge using a Mobile Edge Computing (MEC) server, and the SeNodeBs are used to boost the services provided to users. The concept of caching data has also been investigated, with caching units implemented at different network levels. The proposed system and resource management are simulated using the OPNET modeller and evaluated through multiple scenarios with and without full load. The UE is reconfigured to accommodate dual connectivity with two separate connections for uplink and downlink: the connection to the Macro cell is maintained via the uplink, while the downlink is dedicated to the small cells when content is requested from the cache. The results clearly show that the proposed system decreases latency, while the total throughput delivered by the network is greatly improved when SeNodeBs are deployed; rising throughput raises overall capacity, allowing better services to be provided to existing users or more users to join and benefit from the network. Handover improvement is also considered in this work: with the help of two Artificial Intelligence (AI) entities, better handover performance is achieved. Balancing the load over the SeNodeBs results in less frequent handovers. The proposed load balancer is based on an artificial neural network clustering model with a self-organising map as a hidden layer; it is trained to forecast the network condition and learns to reduce the number of handovers, especially for UEs at the cell edge, by performing only the necessary ones and avoiding handovers to the Macro cell in the downlink direction. The examined handovers concern downlinks routing non-live video stored in the small cells' caches, and a reduction in frequent handovers was achieved when running the balancer.
Staying in the handover orbit, another way to preserve and utilise network resources is to predict handovers before they occur and allocate the required data in the target SeNodeB. The predictor entity in the proposed system architecture combines the features of a Radial Basis Function neural network and a neural network time-series tool to create and update a prediction list from the system's collected data and to learn to predict the next SeNodeB to associate with. The prediction entity is simulated using MATLAB, and the results show that the system was able to deliver up to 92% correct handover predictions, which led to an overall throughput improvement of 75%.
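The thesis's predictor combines a Radial Basis Function neural network with a time-series tool; as a much simpler stand-in, the prediction-list idea of learning "which SeNodeB comes next" from collected data can be sketched with a transition-frequency model (cell names are hypothetical):

```python
from collections import Counter, defaultdict

class HandoverPredictor:
    """Toy next-cell predictor: counts observed SeNodeB-to-SeNodeB
    transitions and predicts the most frequent successor. This is only a
    frequency-based analogue of the thesis's neural predictor."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, path):
        """Update transition counts from a sequence of visited cells."""
        for src, dst in zip(path, path[1:]):
            self.transitions[src][dst] += 1

    def predict(self, current):
        """Return the most likely next cell, or None if unseen."""
        counts = self.transitions.get(current)
        return counts.most_common(1)[0][0] if counts else None

p = HandoverPredictor()
p.observe(["S1", "S2", "S3", "S1", "S2", "S4", "S1", "S2", "S3"])
print(p.predict("S2"))  # most frequent successor of S2
```

Pre-staging cached content at the predicted target cell is what turns a correct prediction into a throughput gain.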
Estimating Footfall From Passive Wi-Fi Signals: Case Study with Smart Street Sensor Project
Measuring the distribution and dynamics of the population at a granular level, both spatially and temporally, is crucial for understanding the structure and function of the built environment. In this era of big data, there have been numerous attempts to do this using the preponderance of unstructured, passive and incidental digital data generated by day-to-day human activities. In attempts to collect, analyse and link these widely available datasets at massive scale, it is easy to put the privacy of the study subjects at risk. This research looks in detail at one such data source, the Wi-Fi probe requests generated by mobile devices, and processes it into granular, long-term information on the number of people on the retail high streets of the United Kingdom (UK). Though this is not the first study to use this data source, the thesis specifically targets and tackles the uncertainties introduced in recent years by features designed to protect the privacy of users of Wi-Fi-enabled mobile devices. The research starts with the design and implementation of multiple experiments to examine Wi-Fi probe requests in detail, then describes the development of a data collection methodology used to gather multiple sets of probe requests at locations across London. The thesis also details how these datasets, along with the massive dataset generated by the 'Smart Street Sensor' project, are used to devise novel data cleaning and processing methodologies, resulting in a high-quality dataset describing the volume of people on UK retail high streets at 5-minute granularity since August 2015, across approximately 1000 locations in 115 towns. The thesis also describes the compilation of a bespoke 'Medium data toolkit' for processing Wi-Fi probe requests (or indeed any other data of similar size and complexity).
Finally, the thesis demonstrates the value and possible applications of such footfall information through a series of case studies. By successfully avoiding the use of any personally identifiable information, the research also demonstrates that it is feasible to prioritise the privacy of users while still deriving detailed and meaningful insights from the data they generate.
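One of the privacy features the thesis has to contend with is MAC address randomisation in probe requests. A first-pass filter, which the thesis's cleaning methodology goes well beyond, checks the locally-administered bit that randomised addresses set:

```python
def is_randomised_mac(mac: str) -> bool:
    """A MAC whose locally-administered bit (second-least-significant bit
    of the first octet) is set is very likely a randomised probe-request
    address rather than a device's burned-in hardware identifier."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0b10)

# Hypothetical captured probe-request source addresses.
probes = ["da:a1:19:00:11:22", "f4:5c:89:aa:bb:cc", "62:33:8a:01:02:03"]
real = [m for m in probes if not is_randomised_mac(m)]
print(real)  # only the globally-unique (burned-in) address survives
```

Counting only non-randomised addresses undercounts modern devices, which is precisely the uncertainty the thesis's methodology is designed to correct for.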
Cognitive MAC protocols for mobile Ad-Hoc networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The term Cognitive Radio (CR) indicates that the radio spectrum can be accessed dynamically and opportunistically by unlicensed users. In CR networks, interference between nodes, the hidden terminal problem and spectrum sensing errors are major issues widely discussed in current research. To improve the performance of such networks, this thesis proposes cognitive Medium Access Control (MAC) protocols for Mobile Ad-Hoc Networks (MANETs). Building on the concept of CR, the thesis develops a cognitive MAC framework in which a cognitive process consisting of cognitive elements makes efficient decisions to optimise the CR network. In this context, three scenarios for maximising the secondary user's throughput are proposed. We found that the throughput improvement depends on the transition probabilities; however, considering the past information state of the spectrum can dramatically increase the secondary user's throughput, by up to 40%. Moreover, increasing the number of channels can improve network throughput by about 25%. Furthermore, to study the impact of Physical (PHY) layer errors on the cognitive MAC layer in MANETs, a Sensing-Error-Aware MAC protocol for MANETs is proposed. The developed model improves MAC layer performance under the challenge of sensing errors by examining two sensing error probabilities: the false alarm probability and the missed detection probability. The simulation results show that both probabilities can be adapted so as to maintain the false alarm probability at values that achieve good results. Finally, a cooperative sensing scheme with interference mitigation for Cognitive Wireless Mesh Networks (CogMesh) is proposed.
Moreover, a priority-based traffic scenario to analyse the problem of packet delay and a novel technique for dynamic channel allocation in CogMesh are presented. By considering each channel in the system as a sub-server, the average delay of the users' packets is reduced, and the cooperative sensing scenario dramatically increases network throughput, by up to 50% more as the arrival rate increases.
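The role of the false alarm probability can be pictured with a simple slotted model (an illustration only, not the thesis's exact analytical model): the secondary user gains throughput only when the channel is idle and no false alarm suppresses its transmission.

```python
def secondary_throughput(p_idle: float, p_false_alarm: float,
                         capacity: float = 1.0) -> float:
    """Illustrative slotted model: useful secondary-user throughput is
    earned in idle slots that are not wasted by a false alarm. Missed
    detections (transmitting in occupied slots) yield nothing useful,
    so they do not appear in the useful-throughput term."""
    return p_idle * (1 - p_false_alarm) * capacity

# Hypothetical operating points: lowering the false alarm probability
# recovers idle slots the secondary user would otherwise have skipped.
base = secondary_throughput(p_idle=0.6, p_false_alarm=0.3)
tuned = secondary_throughput(p_idle=0.6, p_false_alarm=0.1)
print(f"throughput gain from lowering Pfa: {tuned / base - 1:.0%}")
```

The catch, which the sensing-error-aware design addresses, is that lowering the false alarm probability typically raises the missed detection probability, causing more collisions with the primary user.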
Framework for Automated Functional Tests within Value-Added Service Environments
Full version unavailable due to 3rd party copyright restrictions. Recent years have seen standard telecommunication services evolve more and more into next-generation value-added services. This is accompanied by a change in service characteristics, as new services are designed to fulfil the customer's demands instead of just focusing on technologies and protocols. These demands can be very specific, so service providers have to consider diverse potential service functionalities. To make matters worse for service providers, the increasing competition in the telecommunication industry demands a fast transition from concept to market product and a low price for a new service. Effective test solutions therefore need to be developed that can be integrated into current value-added service development life-cycles. Moreover, these solutions should support the involvement of all participating stakeholders, such as the service provider, the test developers and the service developers, and, to enable an agile approach, also the service customer.
This thesis proposes a novel framework for functional testing based on a new kind of description language for value-added services, the Service Test Description. Based on instances of the Service Test Description, sets of reusable test components, described by means of an applied Statecharts notation, are automatically selected and composed into so-called behaviour models. From the behaviour models, abstract test cases can be generated automatically, transformed into TTCN-3 test cases and assembled into an Executable Test Suite. Within a TTCN-3 test system, the Executable Test Suite can be executed against the corresponding value-added service, referred to as the System Under Test. One benefit of the proposed framework is its applicability within standard development life-cycles. The thesis therefore presents a methodology that treats service development and test development as parallel tasks and foresees procedures to synchronise them and to allow an agile approach with customer involvement.
The novel framework is validated through a proof-of-concept working prototype. Example value-added services have been chosen to illustrate the whole process from compiling instances of the Service Test Description until the execution of automated tests.
Overall, this thesis presents a novel solution for service providers to improve the quality of their value-added services through automated functional testing procedures. It enables the early involvement of customers in the service development life-cycle and also helps test developers and service developers to collaborate.
Small cells deployment for traffic handling in centralized heterogeneous network
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. As the next phase of mobile technology, 5G comes with a new vision characterised by a connected society in which everything is effectively connected, providing a variety of services and diverse business models that require more than just higher data rates and more capacity, targeting new kinds of ultra-reliable and flexible connections. However, the next generation of applications, services and use cases will have extreme variation in requirements, which in turn amplifies the demand on network resources. Therefore, 5G will require a whole new design that takes efficient resource management and utilisation into consideration. An observation made throughout this research is that the demand for more capacity, reduced latency and increased density is common to many next-generation use cases. This inescapably implies that small cells are an ideal solution for next-generation application requirements, provided that the necessary storage and computing resources are distributed closer to the actual user. In this context, this research proposes an architecture for a centralised heterogeneous network, consisting of Macro and Small cells with storage and computing resources, all controlled by a centralised functionality embedded within a gateway at the edge of the network. Compared to the basic network, the proposed solutions have been shown to enhance overall system performance. This involves extending the system by adding small cells to serve dedicated services to User Equipment (UE) with dual connectivity from a local server, which reduces overall system delay while increasing overall system throughput.
The added centralised mobility management was shown to be capable of tracing the mobility of UEs within the system coverage by keeping one connection with the main cell while moving between small cells, improving handover delay by 11% without service interruptions. Finally, the proposed slicing model demonstrated the system's ability to provide different levels of service to users based on different Quality of Service (QoS) requirements, and to differentiate between various applications without affecting the performance of other services, benefiting from a more flexible infrastructure than the traditional network. In addition, a 50% improvement in performance was observed in terms of CPU utilisation. In such an architecture, the required capacity can be added exactly where and when it is needed, coverage problems can be directly addressed, and higher throughput, lower latency and efficient mobility management can be achieved as a result of efficient resource management and distribution, one of the key factors in the deployment of next-generation mobile network systems.
An integrated security Protocol communication scheme for Internet of Things using the Locator/ID Separation Protocol Network
Internet of Things communication is mainly based on a machine-to-machine pattern, in which devices are globally addressed and identified. However, as the number of connected devices increases, so does the burden on the network infrastructure. The major challenges are the size of the routing tables and the efficiency of current routing protocols in the Internet backbone. To address these problems, an Internet Engineering Task Force (IETF) working group, along with a research group at Cisco, is working on the Locator/ID Separation Protocol as a routing architecture that can provide new semantics for IP addressing, simplifying routing operations and improving scalability in the future Internet, including the Internet of Things. Nonetheless, the Locator/ID Separation Protocol is still at an early stage of implementation, and its security protocols, e.g. Internet Protocol Security (IPSec), in particular, are still in their infancy.
On this basis, three scenarios were considered. Firstly, in the initial stage, each Locator/ID Separation Protocol-capable router needs to register with a Map-Server; this is known as the Registration Stage, and it is vulnerable to masquerading and content-poisoning attacks. Secondly, in the address resolving stage, the Map-Server (MS) accepts Map-Requests from Ingress Tunnel Routers and Egress Tunnel Routers; these routers in turn look up the database and return the requested mapping to the endpoint user. However, this stage lacks data confidentiality and mutual authentication, and the Locator/ID Separation Protocol limits the efficiency of the security protocol that works against redirecting the data or acting as fake routers. Thirdly, as the number of different Internet of Things devices increases vastly, so do the interconnected links between them; the communication between devices can thus easily be exposed to disclosure by attackers, for example through Man-in-the-Middle (MitM) and Denial of Service (DoS) attacks.
This research provided a comprehensive study of communication and mobility in the Internet of Things, as well as a taxonomy of different security protocols. It then investigated the security threats and vulnerabilities of the Locator/ID Separation Protocol using the X.805 framework standard. Three security protocols were then developed to secure the exchanged communication transactions in the Locator/ID Separation Protocol. The first was implemented to secure the Registration stage using an ID-based cryptography method. The second addressed the Resolving stage, between the Ingress Tunnel Router and Egress Tunnel Router, using a challenge-response authentication and key agreement technique. The third security protocol was proposed, analysed and evaluated for Internet of Things communication devices; it is based on authentication and group key agreement using the El-Gamal concept. The developed protocols set an interface between each level of the phases to achieve a security refinement architecture for the Internet of Things based on the Locator/ID Separation Protocol. The protocols were verified using the Automated Validation of Internet Security Protocols and Applications (AVISPA), a push-button tool for the automated validation of security protocols, and the results demonstrate that they do not have any security flaws. Finally, a performance analysis and evaluation of the security refinement protocols were conducted using the Contiki OS and the Cooja simulation tool. The results showed that the security refinement is highly scalable and quite memory-efficient, needing only 72 bytes of memory to store the keys on the Wireless Sensor Network (WSN) device.
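The Resolving-stage protocol relies on challenge-response authentication. As a generic illustration of that pattern only (the thesis's protocols use ID-based cryptography and El-Gamal, not a pre-shared HMAC key, and all names below are hypothetical), a challenge-response exchange between an ITR and a Map-Server might look like:

```python
import hashlib
import hmac
import secrets

# Hypothetical pre-shared secret between the ITR and the Map-Server;
# a stand-in for the keys established by the thesis's actual protocols.
SHARED_KEY = b"pre-shared-itr-ms-key"

def make_challenge() -> bytes:
    """Verifier picks a fresh random nonce to prevent replay."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, key: bytes) -> bytes:
    """Prover returns an HMAC over the challenge, proving key possession."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Constant-time comparison against the expected response."""
    return hmac.compare_digest(respond(challenge, key), response)

challenge = make_challenge()
response = respond(challenge, SHARED_KEY)
print(verify(challenge, response, SHARED_KEY))    # True
print(verify(challenge, response, b"wrong-key"))  # False
```

Because the challenge is fresh per exchange, a captured response cannot be replayed, which is the property that protects the Map-Request/Map-Reply exchange from masquerading.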
Adoption of cloud computing technology for exploration, drilling and production activities: Nigerian upstream oil and gas industry
A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. The upstream oil and gas industry, which identifies and produces oil and gas, is essential for the generation of energy. This sector has a fragmented pattern of activities and relies on real-time information and accurate results for faster and more accurate decision making. Cloud computing offers information technology (IT) services via the internet and brings several benefits, such as flexibility, scalability, cost reduction, real-time information, monitoring, collaboration and timely interpretation of exploration and production data. However, the cloud has not yet penetrated the upstream oil and gas sector. The adoption of cloud computing in the oil and gas industry in general is little discussed in academia, let alone in the upstream sector. This research studies the adoption of cloud computing in the upstream oil and gas industry, particularly in Nigeria, an emerging economy. The decision to adopt cloud computing for exploration, drilling and production is a complex process. A major outcome of the research is the development of a model of the factors influencing the decision to adopt cloud technology in the upstream oil and gas sector. In addition, the study develops a prototype decision support system (DSS) based on the analytic hierarchy process (AHP), which enables decision makers to select an appropriate cloud service model. The developed prototype DSS is described in Appendix A.
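An AHP-based DSS derives weights for the candidate cloud service models from pairwise comparison matrices. A minimal sketch using the row geometric-mean approximation to the principal-eigenvector priorities (the comparison values below are hypothetical, not the thesis's):

```python
import math

def ahp_priorities(matrix):
    """Approximate AHP priority vector via the row geometric-mean method,
    a common approximation to the principal eigenvector of a pairwise
    comparison matrix."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise comparisons of three cloud service models
# (IaaS, PaaS, SaaS) on a single criterion, using Saaty's 1-9 scale;
# entry [i][j] says how strongly option i is preferred over option j.
pairwise = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = ahp_priorities(pairwise)
print([round(w, 3) for w in weights])
```

In a full AHP DSS these per-criterion weights are themselves weighted by criterion priorities and checked for consistency before a recommendation is made.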
This research adopted a mixed-methods approach, comprising semi-structured interviews used to collect qualitative data, which were analysed using NVivo 11 software, and a questionnaire survey used to collect quantitative data, which were analysed using the Analysis of Moment Structures (AMOS) based structural equation modelling technique.
The findings of this research confirmed the significant factors for cloud computing adoption for exploration, drilling and production activities. The research has both theoretical and practical implications, which reinforce the need for cloud technology adoption in the upstream oil and gas sector. In addition, the prototype Decision Support System (DSS) developed from the research findings is innovative and would be useful to the Nigerian government, cloud service providers and the upstream oil and gas sector. Finally, the study makes recommendations for the upstream O&G sector based on its findings.
Nigerian Petroleum Technology Development Fund (PTDF)