
    IoT future in Edge Computing

    With the advent of the Internet of Things (IoT) and data convergence using rich cloud services, data computing has been pushed to new horizons. However, much of this data is generated at the edge of the network, making fast response times a requirement. A new computing paradigm, edge computing, which processes data at the edge of the network, is therefore needed. In this paper, we discuss the IoT architecture, the predominant application protocols, the definition of edge computing and its research opportunities.

    Named Functions at the Edge

    As end-user and edge-network devices are becoming ever more powerful, they are producing ever increasing amounts of data. Pulling all this data into the cloud for processing is impossible, not only due to its enormous volume, but also due to the stringent latency requirements of many applications. Instead, we argue that end-user and edge-network devices should collectively form edge computing swarms and complement the cloud with their storage and processing resources. This shift from centralized to edge clouds has the potential to open new horizons for application development, supporting new low-latency services and, ultimately, creating new markets for storage and processing resources. To realize this vision, we propose Named Functions at the Edge (NFE), a platform where functions can i) be identified through a routable name, ii) be requested and moved (as data objects) to process data on demand at edge nodes, iii) pull raw or anonymized data from sensors and devices, iv) securely and privately return their results to the invoker and v) compensate each party for use of their data, storage, communication or computing resources via tracking and accountability mechanisms. We use an emergency evacuation application to motivate the need for NFE and demonstrate its potential.
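
    To make the NFE workflow above concrete, here is a minimal Python sketch of name-based function invocation at an edge node: a function is resolved by a routable name, cached locally once fetched, run against local sensor data, and its usage is logged for later compensation. All class, method and name-prefix choices below are illustrative assumptions rather than the platform's actual API, and the network retrieval, security and payment mechanisms are deliberately left out.

        # Minimal sketch of name-based function invocation at an edge node.
        # All names and interfaces here are hypothetical illustrations of the
        # NFE idea (functions addressed by routable names, moved as data
        # objects, executed against local sensor data); they are not the
        # platform's actual API.

        from typing import Callable, Dict, List


        class EdgeNode:
            def __init__(self, sensor_reader: Callable[[], List[int]]):
                self.function_cache: Dict[str, Callable] = {}   # functions already fetched
                self.sensor_reader = sensor_reader               # local (possibly anonymised) data source
                self.usage_log: list = []                        # accounting of resource usage

            def fetch_function(self, name: str) -> Callable:
                """Resolve a routable function name; in a real deployment this would
                retrieve the function as a named data object from the network."""
                raise NotImplementedError("network retrieval is out of scope for this sketch")

            def invoke(self, name: str) -> object:
                # 1) resolve the named function, using the local cache if possible
                func = self.function_cache.get(name)
                if func is None:
                    func = self.fetch_function(name)
                    self.function_cache[name] = func
                # 2) pull raw or anonymised data from local sensors
                data = self.sensor_reader()
                # 3) execute at the edge and record usage for later compensation
                result = func(data)
                self.usage_log.append({"function": name, "items_processed": len(data)})
                return result


        # Hypothetical usage: count people detected by a local occupancy sensor,
        # e.g. as part of an emergency-evacuation application.
        node = EdgeNode(sensor_reader=lambda: [1, 0, 1, 1])
        node.function_cache["/edge/occupancy/count"] = lambda readings: sum(readings)
        print(node.invoke("/edge/occupancy/count"))  # -> 3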

    Intrusion Prediction System for Cloud Computing and Network Based Systems

    Cloud computing offers cost-effective computational and storage services with on-demand scalable capacities according to customers' needs. These properties encourage organisations and individuals from different disciplines to migrate from classical computing to cloud computing. Although cloud computing is a trendy technology that opens the horizons for many businesses, it is a paradigm that exploits already existing computing technologies in a new framework rather than being a novel technology. This means that cloud computing has inherited classical computing problems that remain challenging. Cloud computing security is considered one of the major problems, requiring strong security systems to protect the platform and the valuable data stored and processed in it. Intrusion detection systems are an important security component and defence layer that detects cyber-attacks and malicious activities in cloud and non-cloud environments. However, they have limitations; for example, attacks are often detected only once the damage has already been done. In recent years, cyber-attacks have increased rapidly in volume and diversity. In 2013, for example, over 552 million customers' identities and crucial information were revealed through data breaches worldwide [3]. These growing threats are further demonstrated in the 50,000 daily attacks on the London Stock Exchange [4]. It has been predicted that cyber-attacks will cost the global economy $3 trillion in aggregate by 2020 [5]. This thesis focuses on proposing an Intrusion Prediction System capable of sensing an attack before it happens in cloud or non-cloud environments. The proposed solution is based on assessing the host system's vulnerabilities and monitoring the network traffic for attack preparations. It has three main modules. The monitoring module observes the network for any intrusion preparations; for this, the thesis proposes a new dynamic-selective statistical algorithm for detecting scan activities, which are part of the reconnaissance that represents an essential step in network attack preparation. The proposed method performs a selective statistical analysis of network traffic, searching for indications of an attack or intrusion. This is achieved by exploring and applying different statistical and probabilistic methods that deal with scan detection. The second module is vulnerability assessment, which evaluates the weaknesses and faults of the system and measures the probability that it will fall victim to a cyber-attack. Finally, the prediction module combines the output of the other two modules and performs a risk assessment of the system's exposure to intrusion. The results of the conducted experiments show that the proposed system outperforms analogous methods in network scan detection performance, which in turn means a significant improvement to the security of the targeted system. The scan detection algorithm achieved high detection accuracy with a 0% false negative rate and a 50% false positive rate. In terms of performance, the detection algorithm consumed only 23% of the data needed for analysis compared to the best-performing rival detection method.
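
    The abstract does not give the details of the dynamic-selective statistical algorithm, but the following Python sketch illustrates the general kind of scan detection it builds on: connection attempts are kept in a sliding time window per source, and a source that probes too many distinct destination ports within the window is flagged as likely reconnaissance. The window length, threshold and field names are assumptions made for illustration, not the thesis's actual design or parameters.

        # Illustrative sketch only: a generic threshold-based port-scan detector
        # over a sliding time window. The thesis's dynamic-selective statistical
        # algorithm is not described in the abstract, so the window length,
        # threshold and field names below are assumptions, not its actual design.

        from collections import defaultdict, deque

        WINDOW_SECONDS = 60       # assumed observation window
        DISTINCT_PORT_LIMIT = 20  # assumed threshold for flagging reconnaissance


        class ScanMonitor:
            def __init__(self):
                # per-source deque of (timestamp, destination_port) observations
                self.events = defaultdict(deque)

            def observe(self, timestamp: float, src_ip: str, dst_port: int) -> bool:
                """Record one connection attempt; return True if src_ip now looks
                like it is scanning (too many distinct ports within the window)."""
                window = self.events[src_ip]
                window.append((timestamp, dst_port))
                # drop observations that have fallen out of the time window
                while window and timestamp - window[0][0] > WINDOW_SECONDS:
                    window.popleft()
                distinct_ports = {port for _, port in window}
                return len(distinct_ports) > DISTINCT_PORT_LIMIT


        monitor = ScanMonitor()
        flagged = False
        for t, port in enumerate(range(1000, 1030)):   # one source probing 30 ports
            flagged = monitor.observe(float(t), "10.0.0.5", port)
        print("scan suspected:", flagged)              # -> True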

    Big data, smart cities and city planning

    I define big data with respect to its size but pay particular attention to the fact that the data I am referring to is urban data, that is, data for cities that are invariably tagged to space and time. I argue that this sort of data is largely being streamed from sensors, and this represents a sea change in the kinds of data that we have about what happens where and when in cities. I describe how the growth of big data is shifting the emphasis from longer-term strategic planning to short-term thinking about how cities function and can be managed, although with the possibility that over much longer periods of time, this kind of big data will become a source of information about every time horizon. By way of conclusion, I illustrate the need for new theory and analysis using six months of smart travel card data recording individual trips on Greater London's public transport systems.

    Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics

    The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focussed on the development of new and more efficient approaches to solving Maxwell's equations. Interest in CEM applications is growing. Several problems that were hard to tackle a few years ago can now be easily addressed thanks to the reliability and flexibility of new technologies, together with the increased computational power. This technological evolution opens the possibility of addressing large and complex tasks. Many of these applications aim to simulate the electromagnetic behavior, for example in terms of input impedance and radiation pattern in antenna problems, or Radar Cross Section in scattering applications. Problems whose solution requires high accuracy instead need full-wave analysis techniques, e.g. in the virtual prototyping context, where the objective is to obtain reliable simulations in order to minimize the number of measurements and, as a consequence, their cost. Besides, other tasks require the analysis of complete structures (which include a high number of details) by directly simulating a CAD model. This approach relieves the researcher of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, this implies: (a) high computational effort, due to the increased number of degrees of freedom, and (b) a worsening of the spectral properties of the linear system during complex analyses. The above considerations underline the need to identify appropriate information technologies that ease the solution process and speed up the required computations. The authors' analysis and expertise suggest that Grid Computing techniques can be very useful for these purposes. Grids appear mainly in high-performance computing environments, where hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could only be addressed sequentially or by using supercomputers. Grid Computing was developed to process enormous amounts of data and enables large-scale resource sharing to solve problems in distributed scenarios. The main advantage of the Grid comes from parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution is computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses our requirements (see Section 3.4 for details). In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to the nodes that belong to the Grid. The set of nodes is composed of both physical and virtualized machines, which provides great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient Grid usage; indeed, the presented architecture can be used by different users running different applications.
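
    The parallelism the chapter relies on can be illustrated with a short Python sketch: once domain decomposition has produced independent blocks, each block can be solved concurrently and the partial results gathered afterwards. The real infrastructure dispatches these jobs through Grid middleware to physical and virtual nodes; the sketch below only uses local worker processes, and solve_block is a placeholder for the actual per-block electromagnetic solver.

        # Minimal sketch of parallel block execution after domain decomposition.
        # The actual infrastructure distributes jobs over Grid middleware across
        # physical and virtualized machines; this illustration only uses local
        # worker processes, and solve_block stands in for the real solver.

        from concurrent.futures import ProcessPoolExecutor


        def solve_block(block_id: int) -> tuple:
            """Placeholder for solving one sub-domain block (e.g. assembling and
            factorising its local matrix); here it just returns a dummy value."""
            local_result = block_id ** 2
            return block_id, local_result


        def solve_all(num_blocks: int) -> dict:
            # blocks are independent, so they can be dispatched in parallel
            with ProcessPoolExecutor() as pool:
                results = dict(pool.map(solve_block, range(num_blocks)))
            # in the real method the per-block results would then be combined
            # into the global solution of Maxwell's equations
            return results


        if __name__ == "__main__":
            print(solve_all(8))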

    Horizon Report 2009

    The annual Horizon Report investigates, identifies and classifies the emerging technologies that the experts who produce it expect to have an impact on teaching and learning, research and creative production in the context of higher education. It also examines the key trends that suggest how these technologies will be used and the challenges they pose for the classroom. Each edition identifies six technologies or practices: two whose adoption is expected in the immediate future (one year or less), two expected in the medium term (two to three years), and two foreseen in the longer term (five years).

    Remotely hosted services and 'cloud computing'

    Emerging Technologies for Learning report: an article exploring the potential of cloud computing to address educational issues.

    Very short term irradiance forecasting using the lasso

    We find an application of the lasso (least absolute shrinkage and selection operator) in sub-5-min solar irradiance forecasting using a monitoring network. The lasso is a variable shrinkage and selection method for linear regression: in addition to minimizing the sum of squared errors, it adds the ℓ1-norm of the regression coefficients as a penalty. This bias–variance trade-off very often leads to better predictions. One-second irradiance time series data are collected using a dense monitoring network in Oahu, Hawaii. As clouds propagate over the network, highly correlated lagged time series can be observed among station pairs. The lasso is used to automatically shrink and select the most appropriate lagged time series for regression. Since only lagged time series are used as predictors, the regression provides true out-of-sample forecasts. It is found that the proposed model significantly outperforms univariate time series models and ordinary least squares regression, especially when training data are few and predictors are many. Very short-term irradiance forecasting is useful in managing the variability within a central PV power plant.
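
    As a rough illustration of the modelling idea, the Python sketch below regresses a target series on lagged copies of a neighbouring series and lets the lasso's ℓ1 penalty shrink and select the useful lags, evaluating on a held-out tail so the forecast is out of sample. The data are synthetic and the lag set and regularisation strength are assumptions made for illustration, not the paper's tuned values; a real run would use the one-second Oahu network measurements.

        # Sketch of lasso-based selection of lagged predictors for very
        # short-term forecasting. Synthetic data; lag set and alpha are
        # assumptions, not the paper's settings.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n = 2000
        upstream = rng.normal(size=n).cumsum()                     # synthetic "upwind" station
        target = np.roll(upstream, 5) + 0.1 * rng.normal(size=n)   # target lags it by 5 steps

        lags = range(1, 11)                                        # candidate lagged predictors
        X = np.column_stack([np.roll(upstream, k) for k in lags])
        X, y = X[20:], target[20:]                                 # drop rows affected by wrap-around

        split = int(0.8 * len(y))                                  # hold out the tail for testing
        model = Lasso(alpha=0.05, max_iter=10000).fit(X[:split], y[:split])

        # lags whose coefficients survived the l1 shrinkage, and out-of-sample fit
        print("selected lags:", [k for k, c in zip(lags, model.coef_) if abs(c) > 1e-3])
        print("test R^2:", model.score(X[split:], y[split:]))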