
    Adaptive Health Monitoring Using Aggregated Energy Readings from Smart Meters

    Worldwide, the number of people living with life-limiting conditions, such as dementia, Parkinson’s disease and depression, is increasing. The resulting strain on healthcare resources means that providing 24-hour monitoring for patients is a challenge. As this problem escalates, caring for an ageing population will become more demanding over the next decade, and new, innovative and cost-effective home monitoring technologies are now urgently required. The research presented in this thesis proposes an alternative, cost-effective method for supporting independent living that enhances Early Intervention Practices (EIP). In the UK, a national roll-out of smart meters is underway: energy suppliers will install and configure over 50 million smart meters by 2020. The UK is not alone in this effort; in other countries, such as Italy and the USA, large-scale deployment of smart meters is in progress. These devices enable detailed around-the-clock monitoring of energy usage. Specifically, each smart meter accurately records the electrical load for a given property at 10-second intervals, 24 hours a day. This granular data captures detailed habits and routines through user interactions with electrical devices. The research presented in this thesis exploits this infrastructure with a novel approach that addresses the limitations of current Ambient Assistive Living technologies. By applying a novel load disaggregation technique and leveraging both machine learning and cloud computing infrastructure, a comprehensive, non-intrusive and personalised solution is achieved. This is accomplished by detecting the use of individual electrical appliances and correlating it with an individual’s Activities of Daily Living. Using a random decision forest, the system detects the use of 5 appliance types in an aggregated load environment with an accuracy of 96%. By presenting the results as vectors to a second random decision forest classifier, both normal and abnormal patient behaviour is detected with an accuracy of 92.64% and a mean squared error of 0.0736. The approach presented in this thesis is validated through a comprehensive patient trial, which demonstrates that the detection of both normal and abnormal patient behaviour is possible.
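
A minimal sketch of the two-stage random decision forest pipeline described above, assuming scikit-learn; the feature extraction, appliance set and data here are hypothetical placeholders, not the thesis's actual features or trained models:

    # Stage 1 detects appliance types from aggregated-load features;
    # stage 2 classifies daily usage vectors as normal or abnormal behaviour.
    # All features and labels below are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Stage 1: per-window load features -> one of 5 appliance types.
    X_load = rng.random((1000, 8))          # e.g. power statistics per 10-second window
    y_appliance = rng.integers(0, 5, 1000)  # hypothetical appliance labels
    stage1 = RandomForestClassifier(n_estimators=100).fit(X_load, y_appliance)

    # Stage 2: per-day appliance-usage vectors -> normal (0) vs. abnormal (1).
    X_days = rng.random((200, 5))           # daily usage profile over the 5 appliances
    y_behaviour = rng.integers(0, 2, 200)
    stage2 = RandomForestClassifier(n_estimators=100).fit(X_days, y_behaviour)

    # At run time: detect appliances window by window, aggregate the detections
    # into a daily usage vector, then classify the day's behaviour.
    detections = stage1.predict(X_load[:100])
    day_vector = np.bincount(detections, minlength=5) / 100.0
    print(stage2.predict(day_vector.reshape(1, -1)))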

    On-demand security and QoS optimization in mobile ad hoc networks

    Scope and Method of Study: Security often comes with overhead that impacts link Quality of Service (QoS) performance. In this dissertation, we propose an on-demand security and QoS optimization architecture for mobile ad hoc networks that automatically adapts the network security level to changes in network topology, traffic conditions and link QoS requirements, so as to keep security and QoS at optimal levels. To achieve this overall objective, we introduce three basic frameworks: a policy-based plug-in security framework, a multi-layer QoS-guided routing algorithm, and a Proportional-Integral-Derivative (PID) feedback control based security and QoS optimization framework. The research has been evaluated with the network simulator ns-2. Finally, we propose an attack-tree and state-machine based security evaluation mechanism for ad hoc networks: a new security measurement metric.

    Findings and Conclusions: Simulations have been run for small and large network sizes, low and high communication ratios, and low and high mobility scenarios. They show that the proposed on-demand security and QoS optimization architecture achieves performance similar to a non-secure QoS routing protocol under various traffic loads. It provides more secure ad hoc networks without compromising QoS performance, especially under light and medium traffic conditions.
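
As a rough illustration of the PID feedback idea, the sketch below adapts a scalar security level against a measured QoS error; the gains, signals and five-level scale are hypothetical, not the dissertation's tuned controller:

    # Hypothetical PID loop: lower the security level when the measured link
    # delay violates the QoS target, raise it again when there is headroom.
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=50.0, ki=10.0, kd=5.0)
    security_level = 3.0       # e.g. tier of cipher strength / key length
    target_delay = 0.050       # QoS requirement: 50 ms link delay

    for measured_delay in [0.040, 0.060, 0.080, 0.055, 0.045]:
        error = target_delay - measured_delay      # negative when QoS is violated
        security_level += pid.step(error, dt=1.0)
        security_level = min(5.0, max(1.0, security_level))  # clamp to levels 1-5
        print(f"delay={measured_delay:.3f}s -> security level {security_level:.2f}")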

    The use of digital games to enhance the physical exercise activity of the elderly: a case of Finland

    According to the World Health Organization (WHO), population ageing is a global phenomenon that brings both challenges and opportunities for society. Today’s longer expected lifespan can create opportunities for the elderly to contribute in many ways to their families and communities, but much depends on their quality of life, which is affected by many factors, including physical and functional health, social well-being, and cognitive abilities. The WHO (2012) states that physical health is one of the indicators of the elderly’s quality of life, and that it declines with increasing age. Participation in regular physical exercise can help the elderly improve their physical and mental health, and this has been aided by the use of modern technologies to promote the elderly’s physical and functional health. Among these technologies, digital games have shown promise in improving and enhancing the elderly’s physical activities through fun and engaging gameplay. The literature highlights that some commercial games on the market (e.g. Microsoft Kinect Sports and Nintendo Wii Sports) have the potential to improve aspects of the elderly’s physical health such as gait, balance, and fall prevention. However, researchers argue that these commercial games are not designed specifically for the elderly and their physical exercise activities, and that most are not user-friendly for elderly people whose functional and physical abilities are limited due to their advanced years. The literature points out that more studies are needed to understand the usability and usefulness of digital games for physical exercise activities so that game designers can create elderly-friendly digital games in the future. In Finland, the government has been focusing on promoting healthy ageing and increasing home care services for the elderly, and in recent years Finnish researchers have used digital games to promote older Finns’ healthy and active ageing. The existing literature, whilst showing the potential of digital games for elderly Finns’ physical health, also acknowledges that further research is needed, particularly in the Finnish context. Thus, in this study we aimed to investigate digital games and specifically assess their application to older Finns’ physical activities, focusing on the quality of users’ experiences and their reported ease of use and perceived usefulness. We used a mixed methods approach, applying both qualitative and quantitative research methods. The study design comprised four stages: requirements gathering, analysis and design, prototyping, and evaluation. Firstly, we conducted pre-studies to elicit users’ requirements. This was followed by analysis of the resulting data to identify trends and patterns, which fuelled ideas in the brainstorming, game design and development phases. The final product was a digital game-based physical exercise, the Skiing Game. We then evaluated the Skiing Game in Finland with 21 elderly Finns (M = 7, F = 14, average age 76). Using questionnaires, observation, and interviews, we investigated user experiences, focusing on the game’s usability and its usefulness for enhancing the physical activity and wellbeing of the elderly. We also conducted a comparative test of the Skiing Game in Japan with 24 elderly Japanese participants (M = 12, F = 12, average age 72) to further understand non-Finnish elderly users’ experiences.
The findings from the usability study of the Skiing Game in Finland demonstrated that elderly Finns had a positive experience of the gameplay, and their motivation was noticeably high. The study also confirmed that elderly Finns have a genuine interest in digital game-based exercises and strong intentions to play digital games as a form of physical exercise in the future. Although most of them had negative views and misconceptions about digital games prior to the study, after the gameplay their attitudes were decidedly positive. They acknowledged that, whilst playing digital games could be an alternative way of exercising for them, they would mainly do so when they did not have access to their usual non-digital physical exercise. The Japanese usability study of the Skiing Game showed that elderly Japanese people also had positive user experiences of playing digital games and also intend to use them in the future. Similarly, after playing the game they reported that their attitudes towards digital games became positive, and indicated that playing digital games could be an alternative way of exercising. Although a comparison of the two studies suggests that the elderly Finns had relatively more positive experiences whilst playing the Skiing Game than their Japanese counterparts, in general both groups had a positive experience of the gameplay and showed interest in digital games as an alternative form of exercise. Based on the usability lessons learned from these two studies, this report makes recommendations for practitioners and designers regarding improvements in game design and development. Implementing these modifications in future designs and further development of digital games for the elderly will improve their commercial viability and user uptake. The findings can provide valuable insights, particularly for Finnish policymakers and healthcare practitioners who are keen to introduce digital games into the aged-care sector in Finland. The studies have also provided valuable insights into the optimal methods for introducing Finnish digital games to international markets, in particular digital games tailored specifically to the physical exercise needs and motivations of the elderly. Taking the limitations of the study into consideration, we outline future studies and further improvements to be made to the game.

    Inferring Queueing Network Models from High-precision Location Tracking Data

    Stochastic performance models are widely used to analyse the performance and reliability of systems that involve the flow and processing of customers. However, traditional methods of constructing a performance model are typically manual, time-consuming, intrusive and labour-intensive, and the limited amount and low quality of manually-collected data often lead to an inaccurate picture of customer flows and poor estimates of model parameters. Driven by advances in wireless sensor technologies, recent real-time location systems (RTLSs) enable the automatic, continuous and unobtrusive collection of high-precision location tracking data, in both indoor and outdoor environments. This high-quality data provides an ideal basis for the construction of high-fidelity performance models. This thesis presents a four-stage data processing pipeline which takes high-precision location tracking data as input and automatically constructs a queueing network performance model approximating the underlying system. The first two stages transform raw location traces into high-level “event logs” recording when, and for how long, a customer entity requests service from a server entity. The third stage infers the customer flow structure and extracts samples of the time delays involved in the system, including service times, customer interarrival times and customer travelling times. The fourth stage parameterises the service process and customer arrival process of the final output queueing network model. Collecting location traces large enough for inference through physical experiments is expensive, labour-intensive and time-consuming; we therefore developed LocTrack-JINQS, an open-source simulation library for constructing location-aware simulations and generating synthetic location tracking data. Finally, we examine the effectiveness of the data processing pipeline through four case studies based on both synthetic and real location tracking data. The results show that the methodology performs with moderate success in inferring multi-class queueing networks composed of single-server queues with FIFO, LIFO and priority-based service disciplines; it is also capable of inferring different routing policies, including simple probabilistic routing, class-based routing and shortest-queue routing.
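
To make the parameterisation stage concrete, the sketch below estimates the arrival rate, service rate and utilisation of a single FIFO queue from extracted samples; the event-log fields and values are assumed for illustration, not the thesis's actual log format or estimators:

    # Hypothetical event log for one single-server FIFO queue:
    # (customer_id, arrival_time, service_start, service_end), times in seconds.
    events = [
        (1, 0.0, 0.0, 0.8),
        (2, 1.5, 1.5, 2.1),
        (3, 2.0, 2.1, 2.9),
        (4, 3.6, 3.6, 4.0),
    ]

    arrivals = [a for _, a, _, _ in events]
    service_times = [end - start for _, _, start, end in events]
    interarrivals = [b - a for a, b in zip(arrivals, arrivals[1:])]

    lam = len(interarrivals) / sum(interarrivals)   # arrival rate = 1 / mean interarrival
    mu = len(service_times) / sum(service_times)    # service rate = 1 / mean service time
    print(f"lambda={lam:.2f}/s, mu={mu:.2f}/s, utilisation rho={lam / mu:.2f}")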

    Location-dependent key management schemes supported by randomly selected cell reporters in wireless sensor networks

    In order to secure vital and critical information inside Wireless Sensor Networks (WSNs), the security requirements of data confidentiality, authenticity and availability should be guaranteed. The leading key management schemes are those that employ location information to generate security credentials. This thesis therefore proposes three novel location-dependent key management schemes. First, a novel Location-Dependent Key Management Protocol for a Single Base Station (LKMP-SBS) is presented. As a location-dependent scheme, the WSN zone is divided virtually into cells, and any event report generated by a particular cell is signed with a new type of endorsement called a cell-reporter signature, where the cell reporters are a set of nodes selected randomly by the BS from the nodes located within that cell. This system is analysed and proved to outperform other schemes in terms of data security requirements. Regarding data confidentiality, for the three values of z (1, 2, 3) the improvement is 95%, 90% and 85% respectively when 1000 nodes are compromised. In terms of data authenticity, an enhancement of 49%, 24% and 12.5% is gained with z = 1, 2, 3 respectively when half of all nodes are compromised. Finally, the optimum number of cell reporters with respect to the security requirements is investigated extensively and proven to be z = n/2. The second contribution is the design of a novel Location-Dependent Key Management Protocol for Multiple Base Stations (LKMP-MBS). In this scheme, different strategies for handling the WSN with multiple BSs are investigated, and the optimality of the scheme is analysed in terms of the number of cell reporters. Both data confidentiality and authenticity are proven to be proportional to e^(1/N), and the optimum number of cell reporters per base station is calculated as z_opt = n/(2M), so that the total over all M base stations is Σ_{ℓ=1..M} |z_opt^(ℓ)| = n/2. Moreover, the security robustness of this scheme is analysed and proved to outperform relevant schemes in terms of data confidentiality and authenticity, and, in comparison with LKMP-SBS, the adoption of multiple base stations is shown to be significantly important in improving overall system security. The third contribution is the design of a novel Mobility-Enabled Location-Dependent Key Management Protocol for Multiple BSs (MELKMP-MBS), a key management scheme capable of serving a WSN with mobile nodes. Several types of handover are presented in order to maintain service availability for a mobile node during its movement between two zones in the network. The communication overhead of MELKMP-MBS is analysed, simulated and compared with that of other schemes; results show a significant improvement over other schemes in terms of handover efficiency and communication overhead. Furthermore, the optimality of the WSN design, such as the values of N and n, is investigated in terms of communication overhead in all protocols, and it is shown that the optimum number of nodes in each cell, which incurs the minimum communication overhead in the network, is n = ∛(2N). This work was supported by the Ministry of Higher Education and Scientific Research in Iraq and the Iraqi Cultural Attaché in London.
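
A toy sketch of the cell-reporter mechanism, assuming the BS knows which nodes lie in each cell and selects z = n/2 reporters uniformly at random; the function names are hypothetical and the signature scheme itself is abstracted away:

    # The BS picks z random cell reporters per cell; an event report is accepted
    # only if it carries endorsements from every selected reporter of that cell.
    import random

    def select_reporters(cell_nodes, z=None):
        """Pick z reporters at random from the nodes located in one cell."""
        if z is None:
            z = max(1, len(cell_nodes) // 2)   # optimum z = n/2 from the analysis
        return set(random.sample(cell_nodes, z))

    def accept_report(endorsers, reporters):
        """BS-side check: all chosen reporters must have endorsed the report."""
        return reporters <= set(endorsers)

    cell = ["n1", "n2", "n3", "n4", "n5", "n6"]
    reporters = select_reporters(cell)
    print(reporters, accept_report(cell, reporters))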

    Technology for large space systems: A bibliography with indexes (supplement 07)

    This bibliography lists 366 reports, articles, and other documents introduced into the NASA scientific and technical information system between January 1, 1982 and June 30, 1982. Subject matter is grouped according to systems, interactive analysis and design, structural concepts, control systems, electronics, advanced materials, assembly concepts, propulsion, solar power satellite systems, and flight experiments.

    Technology for large space systems: A bibliography with indexes (supplement 17)

    This bibliography lists 512 reports, articles, and other documents introduced into the NASA scientific and technical information system between January 1, 1987 and June 30, 1987. Its purpose is to provide helpful information to the researcher, manager, and designer in technology development and mission design. Subject matter is grouped according to systems, interactive analysis and design, structural and thermal analysis and design, structural concepts and control systems, electronics, advanced materials, assembly concepts, propulsion, and solar power satellite systems.

    MSL Framework: (Minimum Service Level Framework) for Cloud Providers and Users

    Cloud computing enables parallel computing and has emerged as an efficient technology for meeting the challenges of the rapid growth of data experienced in this Internet age. It is an emerging technology that offers subscription-based services, providing different models such as IaaS, PaaS and SaaS, among others, to cater to the needs of different user groups. The technology has enormous benefits, but there are serious concerns and challenges related to the lack of uniform standards, or the non-existence of a minimum benchmark for the level of services offered across the industry, to provide an effective, uniform and reliable service to cloud users. As cloud computing gains popularity, organizations and users have problems adopting the service due to the lack of a minimum service level framework which can act as a benchmark in the selection of a cloud provider and ensure quality of service in line with the user’s expectations. The situation becomes more critical due to the distributed nature of the service provider, which can be offering the service from any part of the world. Because there is no minimum service level framework to act as a benchmark for uniform service across the industry, serious concerns have been raised recently: security and data privacy breaches; authentication and authorization issues; lack of third-party audit and identity management problems; variable standards for integrity, confidentiality and data availability; no uniform incident response and monitoring standards; interoperability and lack of portability standards; lack of infrastructure protection service standards; and weak governance and compliance standards. These are major causes of concern for cloud users. Due to confusion and the absence of universally agreed SLAs for a service model, different qualities of service are being provided across the cloud industry. Currently there is no uniform performance model, agreed by all stakeholders, that provides performance criteria to measure, evaluate and benchmark the level of services offered by the various cloud providers in the industry. With the implementation of the General Data Protection Regulation (GDPR) and demand from cloud users for Green SLAs that provide better resource allocation mechanisms, there will be serious implications for cloud providers and their consumers due to the lack of uniformity in SLAs and the variable standards of service offered by various cloud providers. This research examines weaknesses in the service level agreements offered by various cloud providers and the impact of the absence of a uniformly agreed minimum service level framework on the adoption and usage of cloud services. The research is focused on a higher education case study and proposes a conceptual model, based on a uniform minimum service model, that acts as a benchmark for the industry to ensure quality of service to cloud users in higher education institutions and to remove the barriers to the adoption of cloud technology. The proposed Minimum Service Level (MSL) framework provides a set of minimum, uniform standards in the key areas of concern raised by the participants from the HE institution, standards which are essential to cloud users and provide a minimum quality benchmark that can become a uniform standard across the industry.
The proposed model produces cloud computing implementation evaluation criteria in an attempt to reduce the barriers to adoption of cloud technology and to set minimum uniform standards to be followed by all cloud providers, regardless of their hosting location, so that their performance can be measured, evaluated and compared across the industry. This would improve the overall QoS (Quality of Service) received by cloud users, remove the adoption barriers and concerns of cloud users, and increase competition across the cloud industry.
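
As an illustration of how a minimum service level benchmark might be encoded and checked against a provider's advertised SLA, here is a hypothetical sketch; the metric names and thresholds are invented, not the framework's actual criteria:

    # Encode minimum service level (MSL) thresholds and report which criteria
    # a provider's offered SLA fails to meet. Values are illustrative only.
    MSL_BENCHMARK = {
        "availability_pct": 99.9,        # minimum uptime percentage
        "incident_response_hours": 4,    # maximum time to first response
        "third_party_audit": True,       # independent audit required
        "data_encryption_at_rest": True,
    }

    def evaluate_sla(sla):
        """Return the list of MSL criteria the offered SLA fails to meet."""
        failures = []
        for metric, minimum in MSL_BENCHMARK.items():
            offered = sla.get(metric)
            if isinstance(minimum, bool):
                ok = offered is True
            elif metric.endswith("_hours"):
                ok = offered is not None and offered <= minimum  # lower is better
            else:
                ok = offered is not None and offered >= minimum  # higher is better
            if not ok:
                failures.append(metric)
        return failures

    provider_sla = {"availability_pct": 99.5, "incident_response_hours": 8,
                    "third_party_audit": True}
    print(evaluate_sla(provider_sla))   # criteria this provider fails to meet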