
    Enhancing LTE with Cloud-RAN and Load-Controlled Parasitic Antenna Arrays

    Get PDF
    Cloud radio access network (C-RAN) systems, consisting of remote radio heads (RRHs) densely distributed in a coverage area and connected by optical fibers to a cloud infrastructure with large computational capabilities, have the potential to meet the ambitious objectives of next generation mobile networks. Practical implementations of C-RANs, however, must tackle fundamental technical and economic challenges. In this article, we present an end-to-end solution for practically implementable C-RANs by providing innovative solutions to key issues such as the design of cost-effective hardware and power-efficient signals for RRHs, efficient design and distribution of data and control traffic for coordinated communications, and the conception of a flexible and elastic architecture supporting dynamic allocation of both the densely distributed RRHs and the centralized processing resources in the cloud to create virtual base stations (BSs). More specifically, we propose a novel antenna array architecture called the load-controlled parasitic antenna array (LCPAA), in which multiple antennas are fed by a single RF chain. Energy- and spectral-efficient modulation and signaling schemes that are easy to implement are also provided. Additionally, the design presented for the fronthaul enables flexibility and elasticity in resource allocation to support BS virtualization. A layered design of information control for the proposed end-to-end solution is presented. The feasibility and effectiveness of such an LCPAA-enabled C-RAN system setup have been validated through an over-the-air demonstration.
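    A minimal sketch of the single-RF-chain idea behind a parasitic array: the driven element excites currents on reactively loaded parasitic elements, and changing the loads reshapes the radiation pattern. This is a generic ESPAR-style model with invented element spacing, mutual-impedance matrix, and load values, not the LCPAA design of the article.

```python
# Generic parasitic-array beam-shaping sketch (illustrative values only).
import numpy as np

def espar_pattern(Z, x_loads, angles_deg, spacing_wl=0.25):
    """Far-field pattern of a parasitic array with one driven element.

    Z          : (N, N) mutual-impedance matrix (element 0 is the driven one)
    x_loads    : (N-1,) reactive loads [ohm] on the parasitic elements
    angles_deg : azimuth angles at which to evaluate the pattern
    """
    n = Z.shape[0]
    # Load matrix: 50-ohm source on the driven port, pure reactances elsewhere.
    zl = np.diag(np.concatenate(([50.0], 1j * np.asarray(x_loads))))
    v = np.zeros(n, dtype=complex)
    v[0] = 1.0                                   # single RF feed
    currents = np.linalg.solve(Z + zl, v)        # induced element currents

    # Element positions: driven element at the centre, parasitics on a ring.
    phi = 2 * np.pi * np.arange(n - 1) / (n - 1)
    ring = spacing_wl * np.column_stack((np.cos(phi), np.sin(phi)))
    pos = np.vstack(([0.0, 0.0], ring))

    theta = np.deg2rad(angles_deg)
    steering = np.exp(1j * 2 * np.pi * (pos @ np.vstack((np.cos(theta), np.sin(theta)))))
    return np.abs(currents @ steering)           # array factor magnitude

# Example: 7-element array, pattern sampled every 10 degrees.
rng = np.random.default_rng(0)
Z = 50 * np.eye(7) + 10 * (rng.standard_normal((7, 7)) + 1j * rng.standard_normal((7, 7)))
Z = (Z + Z.T) / 2                                # reciprocity: symmetric Z
print(espar_pattern(Z, x_loads=[-30, 0, 30, 60, 30, 0], angles_deg=np.arange(0, 360, 10)))
```

    Adjusting `x_loads` steers the pattern even though only one port carries the transmit signal, which is the property the LCPAA exploits to cut RF-chain cost at the RRHs.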

    Design of Wireless Communication Networks for Cyber-Physical Systems with Application to Smart Grid

    Get PDF
    Cyber-Physical Systems (CPS) are the next generation of engineered systems in which computing, communication, and control technologies are tightly integrated. On one hand, CPS are generally large, with components spatially distributed in a physical world that is highly dynamic; on the other hand, CPS are connected and must be robust and responsive. The smart electric grid and smart transportation systems are examples of emerging CPS that have a significant and far-reaching impact on our daily life. In this dissertation, we design a wireless communication system for CPS. To make CPS robust and responsive, it is critical to have a communication subsystem that is reliable, adaptive, and scalable. Our design uses a layered structure, which includes the physical layer, multiple access layer, network layer, and application layer. Emphasis is placed on the multiple access and network layers. At the multiple access layer, we have designed three approaches, namely compressed multiple access, sample-contention multiple access, and prioritized multiple access, for reliable and selective multiple access. At the network layer, we focus on the problem of creating reliable routes when service interruptions are anticipated. We propose two methods: the first is a centralized method that creates backup paths around zones posing a high interruption risk; the other is a distributed method that utilizes Ant Colony Optimization (ACO) and positive feedback and is able to update multiple paths dynamically. Applications are treated as subscribers to the data service provided by the communication system. Their data quality requirements and Quality of Service (QoS) feedback are incorporated into the cross-layer optimization in our design. We have evaluated our design through both simulation and a testbed. Our design demonstrates the desired reliability, scalability, and timeliness in data transmission. A performance gain is observed over conventional approaches such as random access.
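    As a rough illustration of the distributed routing idea mentioned above, the sketch below runs ant-colony-style route discovery with pheromone reinforcement (positive feedback) on a toy topology. The graph, parameters, and scoring are invented for demonstration and are not the dissertation's protocol.

```python
# Ant-colony route discovery with positive feedback (illustrative only).
import random

def aco_route(graph, src, dst, ants=50, evaporation=0.1, deposit=1.0, seed=1):
    """graph: {node: {neighbor: link_cost}}; returns the best path found."""
    random.seed(seed)
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_cost = None, float("inf")

    for _ in range(ants):
        node, path, cost = src, [src], 0.0
        while node != dst:
            choices = [(v, c) for v, c in graph[node].items() if v not in path]
            if not choices:                       # dead end: abandon this ant
                path = None
                break
            weights = [pheromone[(node, v)] / c for v, c in choices]
            nxt = random.choices([v for v, _ in choices], weights=weights)[0]
            cost += graph[node][nxt]
            path.append(nxt)
            node = nxt
        if path is None:
            continue
        if cost < best_cost:
            best_path, best_cost = path, cost
        # Evaporate everywhere, then reinforce the traversed links (positive feedback).
        pheromone = {k: (1 - evaporation) * p for k, p in pheromone.items()}
        for u, v in zip(path, path[1:]):
            pheromone[(u, v)] += deposit / cost
    return best_path, best_cost

topology = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(aco_route(topology, "A", "D"))   # -> (['A', 'B', 'C', 'D'], 3.0)
```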

    Web service control of component-based agile manufacturing systems

    Get PDF
    Current global business competition has resulted in significant challenges for manufacturing and production sectors, focused on shorter product lifecycles, more diverse and customized products, as well as cost pressures from competitors and customers. To remain competitive, manufacturers, particularly in the automotive industry, require the next generation of manufacturing paradigms supporting flexible and reconfigurable production systems that allow quick system changeovers for various types of products. In addition, closer integration of shop floor and business systems is required, as indicated by the research efforts investigating "Agile and Collaborative Manufacturing Systems" in supporting the production unit throughout the manufacturing lifecycle. The integration of a business enterprise with its shop-floor and lifecycle supply partners is currently only achieved through complex proprietary solutions, due to differences in technology, particularly between automation and business systems. The situation is further complicated by the diverse types of automation control devices employed. Recently, the emerging technology of Service-Oriented Architectures (SOAs) and Web Services (WS) has been demonstrated and proven successful in linking business applications. The adoption of this Web Services approach at the automation level, which would enable a seamless integration of business enterprise and shop-floor systems, is an active research topic within the automotive domain. If successful, reconfigurable automation systems formed by a network of collaborative, autonomous, and open control platforms in a distributed, loosely coupled manufacturing environment can be realized through a unifying platform of WS interfaces for device communication. The adoption of SOA and Web Services on embedded automation devices can be achieved by employing the Device Profile for Web Services (DPWS) protocol, which encapsulates device control functionality as provided services (e.g. device I/O operation, device state notification, device discovery) and business application interfaces into physical control components of machining automation. This novel approach supports the possibility of integrating pervasive enterprise applications through unifying Web Services interfaces and neutral Simple Object Access Protocol (SOAP) message communication between control systems and business applications over standard Ethernet local area networks (LANs). In addition, the reconfigurability of the automation system is enhanced via the utilisation of Web Services throughout an automated control, build, installation, test, maintenance and reuse system lifecycle, with device self-discovery provided by the DPWS protocol...cont'd
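    The sketch below shows only the basic request/response shape of the idea above: a device I/O operation wrapped as a web-service endpoint that exchanges SOAP envelopes over HTTP on a plant LAN. It is not DPWS itself (which additionally specifies WS-Discovery, WS-Eventing, and metadata exchange), and the endpoint name and operation are invented.

```python
# Minimal SOAP-over-HTTP wrapper around a device operation (hypothetical names).
from http.server import BaseHTTPRequestHandler, HTTPServer
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"

def actuate_clamp(position: int) -> str:
    """Stand-in for a real device I/O operation on the control component."""
    return f"clamp moved to {position}"

class DeviceService(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        envelope = ET.fromstring(body)
        # Pull the requested position out of the (hypothetical) ActuateClamp element.
        position = int(envelope.find(f".//{{{SOAP_NS}}}Body/ActuateClamp/Position").text)
        result = actuate_clamp(position)

        response = (
            f'<env:Envelope xmlns:env="{SOAP_NS}"><env:Body>'
            f"<ActuateClampResponse><Result>{result}</Result></ActuateClampResponse>"
            f"</env:Body></env:Envelope>"
        )
        self.send_response(200)
        self.send_header("Content-Type", "application/soap+xml")
        self.end_headers()
        self.wfile.write(response.encode())

if __name__ == "__main__":
    # A business application or another control component would POST a SOAP
    # envelope to this endpoint to invoke the device operation.
    HTTPServer(("0.0.0.0", 8080), DeviceService).serve_forever()
```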

    Special section Industry 4.0: Challenges for the future in manufacturing

    Get PDF
    The sensing enterprise is a digital business innovation concept that makes Cyber-Physical Systems, service-oriented architectures, and advanced human-computer interaction converge, supporting more agile, flexible, and proactive management of unexpected events in today’s global value networks. In essence, it concerns the adoption of future Internet technologies in virtual enterprises. Translating this concept into a general approach to smart systems (smart manufacturing, smart cities, smart logistics, etc.) requires new capabilities from next-generation information systems to perform sensing, modelling, and interpretation of “any” signal from the real world, thus providing the systems with higher flexibility and possibilities for reconfiguration (Panetto et al. 2016). Intuitively, a sensing system requires resources and machinery to be constantly monitored, configured, and easily controlled by human operators. All these functions, and much more indeed, are now implemented by the so-called (Industrial) Internet of Things or Cyber-Physical Systems. With the advent of the new cyber-physical system design paradigm, the number and diversity of systems that need to work together in future enterprises have significantly increased (Weichhart et al. 2016). This trend highlights the need to shift from the classic central control of systems towards systems interoperability as a capability to control, sense, and perceive distributed and heterogeneous systems and their environments, as well as to purposefully and socially act upon their perceptions. Such a shift could have important consequences for the future architecture of the control of these systems. The emergence of cloud-based technologies will also have a significant impact on the design and implementation of cyber-physical systems; using such novel technologies, collaborative engineering practices will increase globally, thus enabling a new generation of small-scale industrial organizations to function in an information-centric manner and enabling Industry 4.0 transformations (Cimini et al. 2017). The potential of such technologies in fostering a leaner and more agile approach towards engineering is very high. Engineers and engineering organizations no longer have to be restricted by the availability of advanced processing capabilities, as they can adopt a ‘pay as you go’ approach, which will enable them to access and use software resources for engineering activities from any remote location in the world.

    Big Data Processing Attribute Based Access Control Security

    Get PDF
    The purpose of this research is to analyze the security of next-generation big data processing (BDP) and examine the feasibility of applying advanced security features to meet the needs of modern multi-tenant, multi-level data analysis. The research methodology was to survey the status of security mechanisms in BDP systems and identify areas that require further improvement. Access control (AC) security services were identified as a priority area, specifically Attribute Based Access Control (ABAC). The exemplar BDP system analyzed is the Apache Hadoop ecosystem. We created data generation software and analysis programs, and posted the detailed experiment configuration on GitHub. Overall, our research indicates that before a BDP system such as Hadoop can be used in an operational environment, significant security configuration is required. We believe that the tools are available to achieve a secure system, with ABAC, using Apache Ranger and Apache Atlas. However, these systems are immature and require verification by an independent third party. We identified the following specific actions for overall improvement: consistent provisioning of security services through a data analyst workstation, a common backplane of security services, and a management console. These areas are partially satisfied in the current Hadoop ecosystem; continued AC improvements through the open source community and rigorous independent testing should further address the remaining security challenges. Robust security will enable further use of distributed, clustered BDP systems, such as Apache Hadoop and Hadoop-like systems, to meet future government and business requirements.
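    For readers unfamiliar with ABAC, the sketch below shows the core decision model the abstract refers to: access is granted by evaluating subject, resource, and environment attributes against policy rules rather than by user/group lists alone. The attribute names and rules are illustrative assumptions, not actual Ranger or Atlas policies.

```python
# Minimal attribute-based access control (ABAC) decision sketch.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Attributes = Dict[str, str]

@dataclass
class Rule:
    description: str
    condition: Callable[[Attributes, Attributes, Attributes], bool]

@dataclass
class AbacPolicy:
    rules: List[Rule] = field(default_factory=list)

    def is_permitted(self, subject: Attributes, resource: Attributes,
                     environment: Attributes) -> bool:
        # Deny-by-default: every rule must hold for the request to succeed.
        return all(r.condition(subject, resource, environment) for r in self.rules)

policy = AbacPolicy(rules=[
    Rule("clearance must dominate data classification",
         lambda s, r, e: int(s["clearance"]) >= int(r["classification"])),
    Rule("analyst must belong to the owning tenant",
         lambda s, r, e: s["tenant"] == r["tenant"]),
    Rule("access only from an approved network zone",
         lambda s, r, e: e["zone"] == "analytics-cluster"),
])

print(policy.is_permitted(
    subject={"clearance": "3", "tenant": "finance"},
    resource={"classification": "2", "tenant": "finance"},
    environment={"zone": "analytics-cluster"},
))  # True
```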

    On the Merits of Deploying TDM-based Next-Generation PON Solutions in the Access Arena As Multiservice, All Packet-Based 4G Mobile Backhaul RAN Architecture

    Full text link
    The phenomenal growth of mobile backhaul capacity required to support emerging fourth-generation (4G) traffic, including mobile WiMAX, cellular Long-Term Evolution (LTE), and LTE-Advanced (LTE-A), requires rapid migration from today's legacy circuit-switched T1/E1 wireline and microwave backhaul technologies to a new fiber-supported, all-packet-based mobile backhaul infrastructure. Clearly, a cost-effective, fiber-supported, all-packet-based mobile backhaul radio access network (RAN) architecture that is compatible with these inherently distributed 4G RAN architectures is needed to efficiently scale current mobile backhaul networks. However, deploying a greenfield fiber-based mobile backhaul infrastructure is a costly proposition, mainly due to the significant cost associated with digging the trenches in which the fiber is to be laid. These factors, along with the inevitable trend towards all-IP/Ethernet transport protocols and packet-switched networks, have prompted many carriers around the world to consider the potential of utilizing the existing fiber-based Passive Optical Network (PON) access infrastructure as an all-packet-based converged fixed-mobile optical access transport architecture to backhaul both mobile and typical wireline traffic. PON-based fiber-to-the-curb/home (FTTC/FTTH) access networks are being deployed around the globe based on two Time-Division Multiplexed (TDM) standards: ITU G.984 Gigabit PON (GPON) and IEEE 802.3ah Ethernet PON (EPON). A PON connects a group of Optical Network Units (ONUs) located at the subscriber premises to an Optical Line Terminal (OLT) located at the service provider's facility. It is the purpose of this thesis to examine the technological requirements and assess the performance and feasibility of deploying TDM-based next-generation (NG) PON solutions in the access arena as a multiservice, all-packet-based 4G mobile backhaul RAN and/or converged fixed-mobile optical networking architecture. Specifically, this work proposes and devises a simple and cost-effective 10G-EPON-based 4G mobile backhaul RAN architecture that efficiently transports and supports a wide range of existing and emerging fixed-mobile advanced multimedia applications and services, along with the diverse quality of service (QoS), rate, and reliability requirements set by these services. The techno-economic merits of utilizing the PON-based 4G RAN architecture versus those of traditional 4G (mobile WiMAX and LTE) RANs will be thoroughly examined and quantified. To achieve our objective, we utilize the existing fiber-based PON access infrastructure with a novel ring-based distribution access network and wireless-enabled OLT and ONUs as the multiservice, packet-based 4G mobile backhaul RAN infrastructure. Specifically, to simplify the implementation of such a complex undertaking, this work is divided into two sequential phases. In the first phase, we examine and quantify the overall performance of the standalone ring-based 10G-EPON architecture (just the wireline part, without overlaying or incorporating the wireless part (4G RAN)) via modeling and simulations. We then assemble the basic building blocks, components, and sub-systems required to build a proof-of-concept prototype testbed for the standalone ring-based EPON architecture. The testbed will be used to verify and demonstrate the performance of the standalone architecture, specifically in terms of power budget, scalability, and reach.
    In the second phase, we develop an integrated framework for efficient interworking between the two wireline PON and 4G mobile access technologies, particularly in terms of unified network control and management (NCM) operations. Specifically, we address the key technical challenges associated with tailoring a typically centralized PON-based access architecture to interwork with and support a distributed 4G RAN architecture and the associated radio NCM operations. This is achieved by introducing and developing several salient networking innovations that collectively enable the standalone EPON architecture to support a fully distributed 4G mobile backhaul RAN and/or a truly unified NG-PON-4G access networking architecture. These include a fully distributed control plane that enables intercommunication among the access nodes (ONUs/BSs), as well as signaling, scheduling algorithms, and handoff procedures that operate in a distributed manner. Overall, the proposed NG-PON architecture constitutes a complete networking paradigm shift from the typically centralized PON architecture and OLT-based NCM operations to a new, disruptive, fully distributed PON architecture and NCM operations, in which all the typically centralized OLT-based PON NCM operations are migrated to and independently implemented by the access nodes (ONUs) in a distributed manner. This requires migrating most of the typically centralized wireline and radio control- and user-plane functionalities, such as dynamic bandwidth allocation (DBA), queue management and packet scheduling, handover control, radio resource management, admission control, etc., typically implemented in today's OLT/RNC, to the access nodes (ONUs/4G BSs). It is shown that the overall performance of the proposed EPON-based 4G backhaul, including both the RAN and the Mobile Packet Core (MPC) {Evolved Packet Core (EPC) per the 3GPP LTE standard}, is significantly augmented compared to that of the typical 4G RAN, specifically in terms of handoff capability, signaling overhead, overall network throughput and latency, and QoS support. Furthermore, the proposed architecture enables redistributing some of the intelligence and NCM operations currently centralized in the MPC platform out into the access nodes of the mobile RAN. Specifically, as this work will show, it enables offloading a sizable fraction of the mobile signaling as well as the actual local upstream traffic transport and processing (LTE bearer switching/set-up, retention, and tear-down, and the associated signaling commands from the BSs to the EPC and vice versa) from the EPC to the access nodes (ONUs/BSs). This has a significant impact on the performance of the EPC. First, it frees up a sizable fraction of badly needed network resources as well as processing on the overloaded centralized serving nodes (AGW) in the MPC. Second, it frees up capacity and sessions on the typically congested mobile backhaul from the BSs to the EPC and vice versa.
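    To make the DBA function mentioned above concrete, the sketch below computes one grant cycle of a simple limited-service style allocation: each access node reports its queue occupancy, grants are capped by a weighted share of the cycle budget, and unused capacity is redistributed. The cycle budget, weights, and ONU counts are invented numbers, and this is the conventional allocation step rather than the thesis's distributed variant.

```python
# One illustrative dynamic bandwidth allocation (DBA) cycle for an EPON.
from typing import Dict

def limited_service_dba(reports_bytes: Dict[str, int],
                        cycle_budget_bytes: int,
                        weights: Dict[str, float]) -> Dict[str, int]:
    """Grant each ONU min(request, its weighted share of the cycle budget)."""
    total_weight = sum(weights.values())
    grants, leftover = {}, 0
    for onu, request in sorted(reports_bytes.items()):
        cap = int(cycle_budget_bytes * weights[onu] / total_weight)
        grants[onu] = min(request, cap)
        leftover += cap - grants[onu]
    # Redistribute unused capacity to ONUs whose requests exceeded their cap.
    for onu, request in sorted(reports_bytes.items(), key=lambda kv: -kv[1]):
        if leftover <= 0:
            break
        extra = min(request - grants[onu], leftover)
        grants[onu] += extra
        leftover -= extra
    return grants

reports = {"onu1": 40_000, "onu2": 5_000, "onu3": 90_000}
print(limited_service_dba(reports, cycle_budget_bytes=100_000,
                          weights={"onu1": 1.0, "onu2": 1.0, "onu3": 2.0}))
# -> {'onu1': 25000, 'onu2': 5000, 'onu3': 70000}
```

    In the centralized case the OLT runs this computation for all ONUs; in the distributed architecture proposed by the thesis, equivalent scheduling logic migrates to the access nodes themselves.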

    Software design and code generation for the engineering graphical user interface of the ASTRI SST-2M prototype for the Cherenkov Telescope Array

    Get PDF
    ASTRI is an ongoing project developed in the framework of the Cherenkov Telescope Array (CTA). An end-to-end prototype of a dual-mirror small-size telescope (SST-2M) has been installed at the INAF observing station on Mt. Etna, Italy. The next step is the development of the ASTRI mini-array, composed of nine ASTRI SST-2M telescopes proposed to be installed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort carried out by Italy, Brazil, and South Africa and led by the Italian National Institute of Astrophysics, INAF. To control the ASTRI telescopes, a specific ASTRI Mini-Array Software System (MASS) was designed using a scalable and distributed architecture to monitor all the hardware devices of the telescopes. Using code generation, we automatically built from the ASTRI Interface Control Documents a set of communication libraries and extensive Graphical User Interfaces that provide full access to the capabilities offered by the telescope hardware subsystems for testing and maintenance. Leveraging these generated libraries and components, we then implemented a human-designed, integrated Engineering GUI for MASS to perform the verification of the whole prototype and to test shared services such as the alarms, configurations, control systems, and scientific online outcomes. In our experience, the use of code generation dramatically reduced the development, integration, and testing effort for the more basic software components and resulted in a fast software release life cycle. This approach could be valuable for the whole CTA project, which is characterized by a large diversity of hardware components.
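    The following toy sketch shows the generation step in miniature: a machine-readable interface description (a small dict standing in for an ASTRI Interface Control Document) is turned into wrapper source code as text. The subsystem, monitor points, commands, and templates are invented for illustration; the actual ASTRI toolchain and ICD format are not reproduced here.

```python
# Toy code generator: emit a client class from an ICD-like description.
ICD = {
    "subsystem": "MountControl",
    "monitor_points": [
        {"name": "azimuth_deg", "type": "float"},
        {"name": "elevation_deg", "type": "float"},
    ],
    "commands": [
        {"name": "slew_to", "args": [("az", "float"), ("el", "float")]},
    ],
}

def generate_wrapper(icd: dict) -> str:
    lines = [f"class {icd['subsystem']}Client:",
             "    def __init__(self, bus):",
             "        self._bus = bus", ""]
    for point in icd["monitor_points"]:
        lines += [f"    def get_{point['name']}(self) -> {point['type']}:",
                  f"        return self._bus.read('{icd['subsystem']}/{point['name']}')", ""]
    for cmd in icd["commands"]:
        sig = ", ".join(f"{a}: {t}" for a, t in cmd["args"])
        args = ", ".join(a for a, _ in cmd["args"])
        lines += [f"    def {cmd['name']}(self, {sig}):",
                  f"        return self._bus.send('{icd['subsystem']}/{cmd['name']}', {args})", ""]
    return "\n".join(lines)

print(generate_wrapper(ICD))  # the generated source would then be written to a module
```

    Keeping the ICD as the single source of truth means a change to the hardware interface regenerates the communication library and the GUI bindings together, which is the effort reduction the abstract reports.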

    Resource Brokering in Grid Computing

    Get PDF
    Grid Computing emerged in academia and evolved into the basis of what is currently known as Cloud Computing and the Internet of Things (IoT). The vast collection of resources that makes up a Grid Computing environment is very complex; multiple administrative domains control access and set policies for the shared computing resources. It is a decentralized environment with geographically distributed computing and storage resources, where each computing resource can be modeled as an autonomous computing entity, yet these entities can collectively work together. This is a class of Cooperative Distributed Systems (CDS). We extend this by applying characteristics of open environments to create a foundation for the next generation of computing platforms, where entities are free to join a computing environment to provide capabilities and take part as a collective in solving complex problems beyond the capability of a single entity. This thesis focuses on modeling "Computing" as the collective performance of individual autonomous fundamental computing elements interconnected in an open "Grid" environment structure. Each computing element is a node in the Grid, and all nodes are interconnected through the Grid's edges. Resource allocation is done at the edges of the Grid, while the connected nodes are simply used to perform computation. The analysis put forward in this thesis identifies Grid Computing as a form of computing that occurs at the resource level. The proposed solution, coupled with advancements in technology and the evolution of new computing paradigms, sets a new direction for grid computing research. The approach here is a leap forward, with a well-defined set of requirements and specifications based on open issues and a focus on autonomy, adaptability, and interdependency. The proposed approach examines the current model of the Grid Protocol Architecture and proposes an extension that addresses the open issues in the divergent set of solutions that have been created.
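    As a simple illustration of brokering resources "at the edges" of such a grid, the sketch below matches a job to an autonomous node based on advertised capabilities and current load. The node attributes and the least-loaded scoring rule are illustrative assumptions, not the thesis's protocol.

```python
# Minimal resource broker: match a job to a capable, least-loaded node.
from typing import Dict, Optional

Node = Dict[str, float]   # e.g. {"cpus": 8, "mem_gb": 32, "load": 0.4}
Job = Dict[str, float]    # e.g. {"cpus": 2, "mem_gb": 8}

def broker(job: Job, nodes: Dict[str, Node]) -> Optional[str]:
    """Return the least-loaded node that can satisfy the job, or None."""
    candidates = [(name, n["load"]) for name, n in nodes.items()
                  if n["cpus"] >= job["cpus"] and n["mem_gb"] >= job["mem_gb"]]
    if not candidates:
        return None            # no single node can serve it; escalate or split
    return min(candidates, key=lambda c: c[1])[0]

nodes = {
    "site-a/node1": {"cpus": 8, "mem_gb": 32, "load": 0.7},
    "site-b/node4": {"cpus": 4, "mem_gb": 16, "load": 0.2},
}
print(broker({"cpus": 2, "mem_gb": 8}, nodes))   # -> "site-b/node4"
```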

    Health Management Applications for International Space Station

    Get PDF
    Traditional mission and vehicle management involves teams of highly trained specialists monitoring vehicle status and crew activities, responding rapidly to any anomalies encountered during operations. These teams work from the Mission Control Center and have access to engineering support teams with specialized expertise in International Space Station (ISS) subsystems. Integrated System Health Management (ISHM) applications can significantly augment these capabilities by providing enhanced monitoring, prognostic, and diagnostic tools for critical decision support and mission management. The Intelligent Systems Division of NASA Ames Research Center is developing many prototype applications using model-based reasoning, data mining, and simulation, working with Mission Control through the ISHM Testbed and Prototypes Project. This paper will briefly describe the information technology that supports current mission management practice and will extend this to a vision for future mission control workflow incorporating new ISHM applications. It will describe ISHM applications currently under development at NASA and will define technical approaches for implementing our vision of future human exploration mission management incorporating artificial intelligence and distributed web service architectures, using specific examples. Several prototypes are under development, each highlighting a different computational approach. The ISStrider application allows in-depth analysis of Caution and Warning (C&W) events by correlating real-time telemetry with the logical fault trees used to define off-nominal events. The application uses live telemetry data and the Livingstone diagnostic inference engine to display the specific parameters and fault trees that generated the C&W event, allowing a flight controller to identify the root cause of the event from among thousands of possibilities simply by navigating animated fault tree models on their workstation. SimStation models the functional power flow of the ISS Electrical Power System and can predict the power balance for nominal and off-nominal conditions. SimStation uses real-time telemetry data to keep detailed computational physics models synchronized with the actual state of the ISS power system. In the event of a failure, the application can then rapidly diagnose the root cause, predict future resource levels, and even correlate technical documents relevant to the specific failure. These advanced computational models will allow better insight into and more precise control of ISS subsystems, increasing safety margins by speeding up anomaly resolution and reducing engineering team effort and cost. This technology will make operating the ISS more efficient and is directly applicable to next-generation exploration missions and Crew Exploration Vehicles.
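    A bare-bones sketch of the fault-tree correlation idea described for ISStrider: a Caution & Warning event is associated with a logical fault tree, and live telemetry flags are propagated through the tree to shortlist candidate root causes. The tree, gates, and telemetry names are invented; the real application relies on the Livingstone diagnostic engine rather than this toy evaluator.

```python
# Toy fault-tree evaluation against boolean telemetry flags.
from typing import Dict, List, Union

Gate = Dict[str, Union[str, list]]   # {"gate": "AND"|"OR", "inputs": [...]}
Tree = Union[str, Gate]              # a leaf is a telemetry flag name

def active_leaves(tree: Tree, telemetry: Dict[str, bool]) -> List[str]:
    """Return the telemetry flags that make this (sub)tree evaluate to True."""
    if isinstance(tree, str):                       # leaf: a monitored flag
        return [tree] if telemetry.get(tree, False) else []
    children = [active_leaves(t, telemetry) for t in tree["inputs"]]
    if tree["gate"] == "AND":
        return sum(children, []) if all(children) else []
    return sum(children, [])                        # OR: any active branch

cw_event_tree: Tree = {
    "gate": "OR",
    "inputs": [
        {"gate": "AND", "inputs": ["pump_current_low", "coolant_flow_low"]},
        "controller_heartbeat_lost",
    ],
}
telemetry = {"pump_current_low": True, "coolant_flow_low": True,
             "controller_heartbeat_lost": False}
print(active_leaves(cw_event_tree, telemetry))
# -> ['pump_current_low', 'coolant_flow_low'] : candidate root causes to inspect
```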