975 research outputs found

    Exploratory study to explore the role of ICT in the process of knowledge management in an Indian business environment

    In the 21st century, with the emergence of a digital economy, knowledge and the knowledge-based economy are rapidly growing. Effectively understanding the processes involved in creating, managing, and sharing knowledge in the business environment is critical to the success of an organization. This study builds on the authors' previous research on the enablers of knowledge management by identifying the relationship between those enablers and the role played by information and communication technologies (ICT) and ICT infrastructure in a business setting. This paper presents the findings of a survey collected in four major Indian cities (Chennai, Coimbatore, Madurai and Villupuram) regarding views and opinions about the enablers of knowledge management in a business setting. A total of 80 organizations participated in the study, with 100 participants in each city. The results show that ICT and ICT infrastructure can play a critical role in the creating, managing, and sharing of knowledge in an Indian business environment.

    Leveraging Resources on Anonymous Mobile Edge Nodes

    Smart devices have become an essential component in the life of mankind. The quick rise of smartphones, IoTs, and wearable devices enabled applications that were not possible a few years ago, e.g., health monitoring and online banking. Meanwhile, smart sensing laid the infrastructure for smart homes and smart cities. The intrusive nature of smart devices granted access to huge amounts of raw data. Researchers seized the moment with complex algorithms and data models to process the data over the cloud and extract as much information as possible. However, the pace and volume of data generation, together with the networking protocols transmitting data to cloud servers, fell short: less than 20% of what was generated at the edge of the network was ever touched. On the other hand, smart devices carry a large set of resources, e.g., CPU, memory, and camera, that sit idle most of the time. Studies showed that for much of the time these resources are either idle (e.g., while the user is sleeping or eating) or underutilized (e.g., inertial sensors during phone calls). These findings articulate a problem: large data sets go unprocessed while idle resources sit in close proximity. In this dissertation, we propose harvesting underutilized edge resources and using them to process the huge amounts of data generated, and currently wasted, by applications running at the edge of the network. We propose flipping the concept of cloud computing: instead of sending massive amounts of data for processing over the cloud, we distribute lightweight applications to process data on users' smart devices. We envision this approach to enhance the network's bandwidth, grant access to larger datasets, provide low-latency responses, and, more importantly, involve up-to-date user contextual information in processing. However, such benefits come with a set of challenges: How to locate suitable resources? How to match resources with data providers? How to inform resources what to do? and When?
How to orchestrate applications' execution on multiple devices? and How to communicate between devices on the edge? Communication between devices at the edge has different parameters in terms of device mobility, topology, and data rate. Standard protocols, e.g., Wi-Fi or Bluetooth, were not designed for edge computing; hence, they do not offer a perfect match. Edge computing requires a lightweight protocol that provides quick device discovery, a decent data rate, and multicasting to devices in the proximity. Bluetooth enjoys wide acceptance within the IoT community; however, its low data rate and unicast communication limit its use at the edge. Despite being otherwise the most suitable communication protocol for edge computing, and unlike other protocols, Bluetooth has a closed-source stack whose lower layers are blocked off from all forms of research, enhancement, and customization. Hence, we offer an open-source version of Bluetooth and then customize it for edge computing applications. In this dissertation, we propose Leveraging Resources on Anonymous Mobile Edge Nodes (LAMEN), a three-tier framework where edge devices are clustered by proximity. When an application is to be executed, LAMEN clusters discover and allocate resources, share the application's executable with those resources, and estimate incentives for each participating resource. In a cluster, a single head node, i.e., the mediator, is responsible for resource discovery and allocation. Mediators orchestrate cluster resources and present them as a virtually large homogeneous resource. For example, two devices, each offering either a camera or a speaker, are presented outside the cluster as a single device with both a camera and a speaker; this can be extended to any combination of resources. The mediator then handles application distribution within a cluster as needed. We also provide a communication protocol that is customizable to the edge environment and the application's needs.
Pushing lightweight applications that end devices can execute over their locally generated data has the following benefits: first, it avoids sharing user data with cloud servers, a privacy concern for many users; second, it introduces mediators as local cloud controllers closer to the edge; third, it hides the user's identity behind mediators; and finally, it enhances bandwidth utilization by keeping raw data at the edge and transmitting only processed information. Our evaluation shows optimized resource lookup and application assignment schemes, in addition to scalability in handling networks with a large number of devices. To overcome the communication challenges, we provide an open-source communication protocol that we customize for edge computing applications; however, it can be used beyond the scope of LAMEN. Finally, we present three applications to show how LAMEN enables various application domains at the edge of the network. In summary, we propose a framework to orchestrate underutilized resources at the edge of the network towards processing data that are generated in their proximity. Using the approaches explained later in the dissertation, we show how LAMEN enhances the performance of applications and enables a new set of applications that were not previously feasible.
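    The mediator's role described above — presenting several devices' idle resources as one combined resource — can be sketched as a simple matching step. This is an illustrative simplification, not the dissertation's actual implementation; the function and data-structure names are invented for the example.

```python
# Hypothetical sketch of a LAMEN-style mediator matching step: a mediator
# covers an application's resource requirements with idle resources
# offered by nearby devices. Names and the greedy policy are assumptions.

def match_resources(required, devices):
    """Greedily pick devices until every required resource is covered.

    required: set of resource names an application needs, e.g. {"camera"}
    devices:  dict mapping device id -> set of idle resources it offers
    Returns a device -> used-resources assignment, or None if infeasible.
    """
    assignment = {}
    missing = set(required)
    # Prefer devices that cover the most of the required resources.
    for dev, offered in sorted(devices.items(),
                               key=lambda kv: -len(kv[1] & required)):
        useful = offered & missing
        if useful:
            assignment[dev] = useful
            missing -= useful
        if not missing:
            break
    return assignment if not missing else None

# Two devices, each offering part of the requirement, are presented
# to the application as one combined resource.
cluster = {"phone-a": {"camera", "cpu"}, "tablet-b": {"speaker"}}
print(match_resources({"camera", "speaker"}, cluster))
```

In the full framework the mediator would additionally track incentives and distribute the application's executable; this sketch only shows the discovery-and-allocation idea.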

    Energy-efficient Transitional Near-* Computing

    Studies have shown that communication networks, devices accessing the Internet, and data centers account for 4.6% of worldwide electricity consumption. Although data centers, core network equipment, and mobile devices are becoming more energy-efficient, the amount of data being processed, transferred, and stored is increasing vastly. Recent computing paradigms, such as fog and edge computing, try to improve this situation by processing data near the user, the network, the devices, and the data itself. In this thesis, these trends are summarized under the new term near-* or near-everything computing. Furthermore, a novel paradigm designed to increase the energy efficiency of near-* computing is proposed: transitional computing. It transfers multi-mechanism transitions, a recently developed paradigm for a highly adaptable future Internet, from the field of communication systems to computing systems. Moreover, three types of novel transitions are introduced to achieve gains in energy efficiency in near-* environments, spanning private Infrastructure-as-a-Service (IaaS) clouds, Software-defined Wireless Networks (SDWNs) at the edge of the network, and Disruption-Tolerant Information-Centric Networks (DTN-ICNs) involving mobile devices, sensors, and edge devices, as well as programmable components on a mobile System-on-a-Chip (SoC). Finally, the novel idea of transitional near-* computing for emergency response applications is presented to assist rescuers and affected persons during an emergency event or a disaster, even though connections to cloud services and social networks might be disturbed by network outages, and the network bandwidth and battery power of mobile devices might be limited.

    From Social Data Mining to Forecasting Socio-Economic Crisis

    Socio-economic data mining has great potential in terms of gaining a better understanding of problems that our economy and society are facing, such as financial instability, shortages of resources, or conflicts. Without large-scale data mining, progress in these areas seems hard or impossible. Therefore, a suitable, distributed data mining infrastructure and research centers should be built in Europe. It also appears appropriate to build a network of Crisis Observatories. They can be imagined as laboratories devoted to the gathering and processing of enormous volumes of data on both natural systems, such as the Earth and its ecosystem, and human techno-socio-economic systems, so as to gain early warnings of impending events. Reality mining provides the chance to adapt more quickly and more accurately to changing situations. Further opportunities arise from individually customized services, which however should be provided in a privacy-respecting way. This requires the development of novel ICT (such as a self-organizing Web), but most likely new legal regulations and suitable institutions as well. As long as such regulations are lacking on a worldwide scale, it is in the public interest that scientists explore what can be done with the huge data available. Big data do have the potential to change or even threaten democratic societies. The same applies to sudden and large-scale failures of ICT systems. Therefore, dealing with data must be done with a large degree of responsibility and care. Self-interests of individuals, companies, or institutions have limits where the public interest is affected, and public interest is not a sufficient justification to violate the human rights of individuals. Privacy is a high good, as confidentiality is, and damaging it would have serious side effects for society. Comment: 65 pages, 1 figure, Visioneer White Paper, see http://www.visioneer.ethz.c

    Recent Advances in Internet of Things Solutions for Early Warning Systems: A Review

    Natural disasters cause enormous damage and losses every year, both economic and in terms of human lives. It is essential to develop systems that predict disasters and generate and disseminate timely warnings. Recently, technologies such as Internet of Things solutions have been integrated into alert systems to provide an effective method of gathering environmental data and producing alerts. This work reviews the literature regarding Internet of Things solutions in the field of Early Warning for different natural disasters: floods, earthquakes, tsunamis, and landslides. The aim of the paper is to describe the adopted IoT architectures, define the constraints and requirements of an Early Warning system, and systematically determine the most used solutions in the four use cases examined. This review also highlights the main gaps in the literature and provides suggestions to satisfy the requirements for each use case based on the articles and solutions reviewed, particularly stressing the advantages of integrating a Fog/Edge layer in the developed IoT architectures. Esposito, M.; Palma, L.; Belli, A.; Sabbatini, L.; Pierleoni, P.
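    The Fog/Edge layer the review advocates typically pre-filters raw sensor readings near their source and forwards only actionable events upstream. The sketch below illustrates that pattern; the threshold value, field names, and event label are invented for the example, not taken from any reviewed system.

```python
# Illustrative Fog/Edge pre-filtering for an early warning pipeline:
# an edge node keeps raw readings local and forwards only threshold
# crossings to the cloud tier, saving bandwidth and latency.

WATER_LEVEL_ALERT_M = 4.0  # assumed flood threshold in metres

def edge_filter(readings, threshold=WATER_LEVEL_ALERT_M):
    """Return only the readings that must travel upstream as alerts."""
    alerts = []
    for r in readings:
        if r["level_m"] >= threshold:
            alerts.append({"sensor": r["sensor"],
                           "level_m": r["level_m"],
                           "event": "FLOOD_WARNING"})
    return alerts

raw = [{"sensor": "river-7", "level_m": 2.1},
       {"sensor": "river-7", "level_m": 4.3},
       {"sensor": "river-9", "level_m": 3.9}]
print(edge_filter(raw))  # one alert instead of three raw readings
```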

    Real time collision warning system in the context of vehicle-to-vehicle data exchange based on driving behaviours analysis

    Worldwide, injuries in vehicle accidents have been on the rise in recent years, mainly due to driver error, regardless of technological innovations and advancements in vehicle safety. Consequently, there is a need for a reliable real-time warning system that can alert drivers of a potential collision. Vehicle-to-Vehicle (V2V) communication is an extensive area of ongoing research and development which has started to revolutionize the driving experience. Driving behaviour is a subject of extensive research that gains special attention due to the relationship between speeding behaviour and crashes, as drivers who engage in frequent and extreme speeding behaviour are over-involved in crashes. The National Highway Traffic Safety Administration (NHTSA) set guidelines on how different vehicle automation levels may reduce vehicle crashes and how on-board short-range sensors coupled with V2V technologies can help facilitate communication among vehicles. Based on previous works, the assessment of drivers' behaviours using their trajectory data is a fresh and open research field. Most studies related to driving behaviour in terms of acceleration-deceleration are evaluated at the laboratory scale using experimental results from actual vehicles. Towards this end, a five-stage methodology for a new collision warning system in the context of V2V based on driving behaviours has been designed. Real-time V2V hardware for data collection purposes was developed. Driving behaviour was analyzed over different timeframes using actual driving data collected in an urban environment from an OBD-II adapter and the GPS data logger of an instrumented vehicle. By measuring the in-vehicle accelerations, it is possible to categorize driving behaviour into four main classes based on real-time experiments: safe, normal, aggressive, and dangerous drivers.
When the vehicle is in a risk situation, the system, based on NRF24L01+PA/LNA, GPS, and OBD-II, passes a signal to the driver using a dedicated LCD and LED light signal. The driver can then instantly act to bring the vehicle into a safe state, effectively avoiding vehicle accidents. The proposed solution provides two main functions: (1) the detection of dangerous vehicles on the road, and (2) the display of a message informing the driver whether it is safe or unsafe to pass. System performance was evaluated to ensure that it achieved the primary objective of improving road safety under the extreme behaviours of the driver in question, from the safest (or least aggressive) to the most unsafe (or most aggressive). The proposed methodology retains some advantages over other literature studies because of the simultaneous use of speed, acceleration, and vehicle location. The V2V driving-behaviour experiments show the effectiveness of the selected approach: it predicts behaviour with an accuracy of over 87% in sixty-four real-time scenarios, demonstrating its capability to detect behaviour and provide a warning to nearby drivers. The system failed to detect only a few times, when the receiving vehicle missed data due to high speed during the test or the distance between the moving vehicles; the data was not received correctly because the transmitted power, the frequency range of the signals, the relative antenna positions, and the number of in-range vehicles all matter in the V2V test scenarios. The latter result supports the conclusion that warnings that transmit their information efficiently and quickly may be better when drivers are under stress or time pressure.
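    The four-class categorization by measured acceleration can be sketched as a simple threshold classifier. The class names come from the abstract, but the g-force thresholds below are illustrative assumptions, not the study's actual calibration.

```python
# Minimal sketch of acceleration-based driver classification into the
# four classes named above. Threshold values are invented for the
# example; a real system would calibrate them from field data.

def classify_driver(peak_accel_g):
    """Map a peak longitudinal acceleration (in g) to a behaviour class."""
    if peak_accel_g < 0.25:
        return "safe"
    if peak_accel_g < 0.45:
        return "normal"
    if peak_accel_g < 0.65:
        return "aggressive"
    return "dangerous"

for g in (0.1, 0.3, 0.5, 0.8):
    print(g, classify_driver(g))
```

In the described system, the class decided on-board would then be broadcast over the NRF24L01 link so nearby vehicles can warn their drivers.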

    Network coding meets multimedia: a review

    While every network node only relays messages in a traditional communication system, the recent network coding (NC) paradigm proposes to implement simple in-network processing with packet combinations in the nodes. NC extends the concept of "encoding" a message beyond source coding (for compression) and channel coding (for protection against errors and losses). It has been shown to increase network throughput compared to traditional network implementations, to reduce delay, and to provide robustness to transmission errors and network dynamics. These features are so appealing for multimedia applications that they have spurred a large research effort towards the development of multimedia-specific NC techniques. This paper reviews the recent work in NC for multimedia applications and focuses on the techniques that fill the gap between NC theory and practical applications. It outlines the benefits of NC and presents the open challenges in this area. The paper initially focuses on multimedia-specific aspects of network coding, in particular delay, in-network error control, and media-specific error control. These aspects make it possible to handle varying network conditions as well as client heterogeneity, which are critical to the design and deployment of multimedia systems. After introducing these general concepts, the paper reviews in detail two applications that lend themselves naturally to NC via the cooperation and broadcast models, namely peer-to-peer multimedia streaming and wireless networking.
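    The "packet combination" idea at the heart of NC can be shown in a few lines: in the classic butterfly example, a relay broadcasts the XOR of two packets, and each receiver recovers the packet it is missing by XOR-ing with the one it already holds. The sketch below is a generic textbook illustration, not a technique from this particular review.

```python
# A minimal instance of in-network packet combination: the relay sends
# one coded packet instead of two plain ones, and each receiver decodes
# using the packet it already has (XOR is its own inverse).

def xor_packets(a: bytes, b: bytes) -> bytes:
    """Combine (or decode) two equal-length packets bytewise."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"hello"               # packet known to receiver 1
p2 = b"world"               # packet known to receiver 2
coded = xor_packets(p1, p2)  # the single packet the relay broadcasts

# Receiver 1 recovers p2; receiver 2 recovers p1.
assert xor_packets(coded, p1) == p2
assert xor_packets(coded, p2) == p1
```

Practical multimedia NC schemes replace the plain XOR with random linear combinations over a finite field, but the decode-by-combination principle is the same.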

    Transition in Monitoring and Network Offloading - Handling Dynamic Mobile Applications and Environments

    Communication demands have increased significantly in recent years, as evidenced in studies by Cisco and Ericsson. Users demand connectivity anytime and anywhere, while new application domains, such as the Internet of Things and vehicular networking, amplify the heterogeneity and dynamics of the resource-constrained environment of mobile networks. These developments pose major challenges to the efficient utilization of existing communication infrastructure. To reduce the burden on the communication infrastructure, mechanisms for network offloading can be utilized. However, to deal with the dynamics of new application scenarios, these mechanisms need to be highly adaptive. Gathering information about the current status of the network is a fundamental requirement for meaningful adaptation. This requires network monitoring mechanisms that are able to operate under the same highly dynamic environmental conditions and changing requirements. In this thesis, we design and realize a concept for transitions within network offloading to handle the former challenges, which constitutes our first contribution. We enable adaptive offloading by introducing a methodology for the identification and encapsulation of gateway selection and clustering mechanisms in the transition-enabled service AssignMe.KOM. To handle the dynamics of environmental conditions, we allow for centralized and decentralized offloading. We generalize and show the significant impact of our concept of transitions within offloading in various heterogeneous application domains, such as vehicular networking or publish/subscribe. We extend the methodology of identification and encapsulation to the domain of network monitoring in our second contribution. Our concept of a transition-enabled monitoring service, AdaptMon.KOM, enables adaptive network state observation by executing transitions between monitoring mechanisms.
We introduce extensive transition coordination concepts for reconfiguration in both of our contributions. To prevent data loss during complex transition plans that cover multiple coexisting transition-enabled mechanisms, we develop the methodology of inter-proxy state transfer. We target the coexistence of our contributions in the use case of collaborative location retrieval, using location-based services as an example. Based on our prototypes of AssignMe.KOM and AdaptMon.KOM, we conduct an extensive evaluation of our contributions on the Simonstrator.KOM platform. We show that our proposed inter-proxy state transfer prevents information loss, enabling the seamless execution of complex transition plans that cover multiple coexisting transition-enabled mechanisms. Additionally, we demonstrate the influence of transition coordination and spreading on the success of the network adaptation. We establish a cost-efficient and reliable methodology for location retrieval by combining our transition-enabled contributions. We show that our contributions allow for adaptation to dynamic environmental conditions and requirements in network offloading and monitoring.
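    The core idea of a transition between monitoring mechanisms — swapping the mechanism at runtime while handing over its state so no observations are lost — can be sketched as follows. This is a generic illustration of the pattern, not the AdaptMon.KOM API; all class and method names are invented.

```python
# Illustrative runtime transition between two monitoring mechanisms
# sharing one interface: the old mechanism exports its state, and the
# new mechanism is initialized from it, so observations survive the
# switch (the idea behind state transfer during a transition).

class PollingMonitor:
    def __init__(self, state=None):
        self.observations = state if state is not None else []

    def observe(self, value):
        self.observations.append(value)

    def export_state(self):
        return self.observations

class EventMonitor(PollingMonitor):
    """Same interface; would gather data event-driven instead of polling."""

active = PollingMonitor()
active.observe(42)

# Transition: instantiate the new mechanism from the old one's state.
active = EventMonitor(state=active.export_state())
active.observe(43)
print(active.observations)  # both observations survive the transition
```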

    Game theory for cooperation in multi-access edge computing

    Cooperative strategies amongst network players can improve network performance and spectrum utilization in future networking environments. Game theory is well suited to these emerging scenarios, since it models highly complex interactions among distributed decision makers. It also identifies the most suitable management policies for the diverse players (e.g., content providers, cloud providers, edge providers, brokers, network providers, or users). These management policies optimize the performance of the overall network infrastructure with a fair utilization of its resources. This chapter discusses relevant theoretical models that enable cooperation amongst the players in distinct ways, namely through pricing or reputation. In addition, the authors highlight open problems, such as the lack of proper models for dynamic and incomplete-information scenarios. These upcoming scenarios are associated with computing and storage at the network edge, as well as the deployment of large-scale IoT systems. The chapter finalizes by discussing a business model for future networks.
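    Why cooperation mechanisms such as pricing or reputation are needed can be illustrated with a toy two-player game between edge providers. The payoff numbers below are invented for the example; they simply reproduce the prisoner's-dilemma structure in which individually rational play undermines the socially optimal outcome.

```python
# A toy game between two edge providers, with invented payoffs: each can
# cooperate (share resources) or defect (free-ride). Defection is the
# dominant strategy, yet mutual cooperation pays both players more,
# which is what pricing or reputation mechanisms try to sustain.

PAYOFF = {  # (row_action, col_action) -> (row_payoff, col_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_action):
    """Row player's payoff-maximising reply to a fixed opponent action."""
    return max(("cooperate", "defect"),
               key=lambda a: PAYOFF[(a, opponent_action)][0])

# Defecting is best against either action, so (defect, defect) is the
# Nash equilibrium even though (3, 3) beats its (1, 1) payoff.
print(best_response("cooperate"), best_response("defect"))
```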