
    Migration from client/server architecture to internet computing architecture

    The Internet Computing Architecture provides an object-based infrastructure that application developers can use to design, develop, and deploy n-tiered enterprise applications and services. Over years of distributed application development, it has supplied techniques and infrastructure software for the successful deployment of many systems and has established a foundation for reuse and component-oriented development. Object-oriented analysis stands at the beginning of this architecture and is carried through to the deployment and management of finished systems. The architecture is multi-platform, multi-lingual, standards-based, and open, offering unparalleled integration capability, and its reusable infrastructure components have allowed mission-critical systems to be developed in record time. This paper provides a detailed overview of the Internet Computing Architecture and how it is applied to designing systems ranging from simple two-tier applications to n-tier Web/Object enterprise systems. Even the best software developers and managers find it hard to sort through alternative solutions to today's business application development challenges, and these problems and their potential solutions have only become more complex now that the Web provides the medium for large-scale distributed computing. Implementing an infrastructure that supports the application architecture and fosters component-oriented development and reuse is an extraordinary challenge, and advances in multi-tiered middleware intended to scale to the needs of large enterprises and the Web/Internet have made the development of object-oriented systems more difficult.
    The Internet Computing Architecture defines a scalable architecture that provides the software components forming a solid middleware foundation and can address many different application types. Its design and development methodologies are interwoven so that the software development process is component-oriented. The architecture's biggest advantage is that developers can build object application servers that simultaneously support two- and three-tier client/server and Object/Web applications. This flexibility allows business objects to be reused by a large number of applications, supports a wide range of application architectures, and offers a flexible infrastructure for integrating data sources. Server-based business objects are managed by runtime services with full support for partitioning applications in a transaction-secure distributed environment, which yields a highly scalable solution for environments with high transaction volumes and large numbers of users. The Internet Computing Architecture is the integration of distributed object technology with the protocols of the World Wide Web. Web protocols such as the Hypertext Transfer Protocol (HTTP) and the Internet Inter-ORB Protocol (IIOP) provide alternate means of communication between a browser on a client machine and server machines, while protocols such as TCP/IP provide the addressing and packet-oriented transport for Internet and intranet communications. Recent advances in networking and World Wide Web technology have promoted a new network-centric computing structure, with the Web evolving into the infrastructure of a global economy on both the public and corporate Internets. Competition is growing among technologies to provide the infrastructure for distributed large-scale applications.
    These technologies emerge from academia, standards activities, and individual vendors. The Internet Computing Architecture is a comprehensive, open, network-based architecture that provides extensibility for the design of distributed environments, and it offers a clear way to integrate client/server computing with distributed object architectures and the Internet. It also creates the opportunity for a new class of extremely powerful operational, collaboration, decision support, and e-commerce solutions that will catalyze the growth of a networked economy based on intra-business, business-to-business (B2B), and business-to-consumer (B2C) electronic transactions. These network solutions can incorporate legacy mainframe systems and emerging applications as well as existing client/server environments, where most of the world's mission-critical applications still run. The Internet Computing Architecture is the industry's only cross-platform infrastructure for developing and deploying network-centric, object-based, end-to-end applications across the network. Open and de facto standards are at its core: the Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), Extensible Markup Language (XML), and the Common Object Request Broker Architecture (CORBA). CORBA is recognized as the industry's most advanced and practical technology for implementing a distributed object environment, including the Interface Definition Language (IDL) for language-neutral interfaces and the Internet Inter-ORB Protocol (IIOP) for object interoperability. Programming languages such as Java provide programmable, extensible, and portable solutions throughout the Internet Computing Architecture, which not only supports but also enhances ActiveX/Component Object Model (COM) clients through open COM/CORBA interoperability specifications.
    Java has also emerged as the de facto standard for distributed object programming in the Internet/intranet arena, making it ideally suited to the distributed object nature of the Internet Computing Architecture. The portability it offers across tiers and platforms supports open standards and makes it an excellent choice for cartridge development across all tiers.

    Partitioning Large Scale Deep Belief Networks Using Dropout

    Deep learning methods have shown great promise in many practical applications, ranging from speech recognition and visual object recognition to text processing. However, most current deep learning methods suffer from scalability problems in large-scale applications, forcing researchers or users to focus on small-scale problems with fewer parameters. In this paper, we consider a well-known machine learning model, the deep belief network (DBN), which has yielded impressive classification performance on a large number of benchmark machine learning tasks. To scale up DBNs, we propose an approach that uses computing clusters in a distributed environment to train large models, while the dense matrix computations within a single machine are sped up using graphics processors (GPUs). When training a DBN, each machine randomly drops out a portion of the neurons in each hidden layer for each training case, so that the remaining neurons learn to detect features that are generally helpful for producing the correct answer. Within our approach, we have developed four methods to combine the outcomes from each machine into a unified model. Our preliminary experiment on the MNIST handwritten digit database demonstrates that our approach improves on the state-of-the-art test error rate.Comment: arXiv admin note: text overlap with arXiv:1207.0580 by other authors
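The abstract does not detail the four combination methods, so the sketch below illustrates just one plausible variant under stated assumptions: each "machine" applies an independent dropout mask to a shared weight vector, takes a local gradient step, and the unified model is formed by plain weight averaging. All names, sizes, and rates are hypothetical.

```python
import random

def dropout_mask(n_units, p_drop, rng):
    # keep each hidden unit with probability 1 - p_drop
    return [0.0 if rng.random() < p_drop else 1.0 for _ in range(n_units)]

def local_update(weights, mask, grad, lr=0.1):
    # only surviving (unmasked) units receive a gradient step on this machine
    return [w - lr * g * m for w, g, m in zip(weights, grad, mask)]

def average_models(models):
    # assumed combination strategy: element-wise averaging of per-machine weights
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

rng = random.Random(0)
init = [0.5] * 8      # shared initial weights
grad = [1.0] * 8      # stand-in gradient for one training case
locals_ = []
for machine in range(4):
    mask = dropout_mask(len(init), p_drop=0.5, rng=rng)
    locals_.append(local_update(init, mask, grad))
unified = average_models(locals_)
```

Because each machine masks a different random half of the units, the averaged model moves every weight by a fraction of the full step, which is the intuition behind dropout acting as an ensemble.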

    A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering

    The growing demand for industrial, automotive, and service robots presents a challenge to the centralized Cloud Robotics model in terms of privacy, security, latency, bandwidth, and reliability. In this paper, we present a 'Fog Robotics' approach to deep robot learning that distributes compute, storage, and networking resources between the Cloud and the Edge in a federated manner. Deep models are trained on non-private (public) synthetic images in the Cloud; the models are adapted to the private real images of the environment at the Edge within a trusted network and subsequently deployed as a service for low-latency, secure inference/prediction for other robots in the network. We apply this approach to surface decluttering, where a mobile robot picks and sorts objects from a cluttered floor by learning a deep object recognition model and a grasp planning model. Experiments suggest that Fog Robotics can improve performance through sim-to-real domain adaptation compared with exclusively using Cloud or Edge resources, while reducing the inference cycle time by 4x to successfully declutter 86% of objects over 213 attempts.Comment: IEEE International Conference on Robotics and Automation, ICRA, 201
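As a toy illustration of the Cloud/Edge split described above (not the paper's actual method), the sketch below fits a model on plentiful synthetic "Cloud" data and then adapts only a small correction term on a handful of private "Edge" samples. The linear model and all data are hypothetical stand-ins for the deep models in the paper.

```python
def fit_slope(xs, ys):
    # least-squares slope through the origin: sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Cloud stage: abundant non-private synthetic data drawn from y = 2x
synthetic_x = [float(i) for i in range(1, 101)]
synthetic_y = [2.0 * x for x in synthetic_x]
slope = fit_slope(synthetic_x, synthetic_y)

# Edge stage: a few private real samples from y = 2x + 1; only an additive
# bias is adapted locally, mimicking lightweight domain adaptation at the Edge
real_x = [1.0, 2.0, 3.0]
real_y = [3.0, 5.0, 7.0]
bias = sum(y - slope * x for x, y in zip(real_x, real_y)) / len(real_x)

def predict(x):
    return slope * x + bias
```

The private real images never leave the trusted Edge network; only the Cloud-trained parameters travel down, which is the privacy argument the abstract makes.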

    Toward Refactoring of DMARF and GIPSY Case Studies -- a Team 12 SOEN6471-S14 Project Report

    The main significance of this document is its two subject systems, GIPSY and DMARF. Intensional languages such as those supported by GIPSY are required for completeness and for carrying practical investigations of the subject forward. DMARF focuses on the software architectural design and implementation of distributed audio recognition and its applications, such as speaker identification, which can run distributively on a web services architecture. The report highlights security aspects of the distributed system, notably the Java Data Security Framework (JDSF) in DMARF, and the Autonomic System Specification Language (ASSL) framework used to integrate a self-optimizing property into DMARF. GIPSY depends mainly on Higher-Order Intensional Logic (HOIL) and reflects three main goals: generality, adaptability, and efficiency.Comment: 35 pages

    PI-Edge: A Low-Power Edge Computing System for Real-Time Autonomous Driving Services

    To simultaneously enable multiple autonomous driving services on affordable embedded systems, we designed and implemented π-Edge, a complete edge computing framework for autonomous robots and vehicles. The contributions of this paper are threefold: first, we developed a runtime layer to fully utilize the heterogeneous computing resources of low-power edge computing systems; second, we developed an extremely lightweight operating system to manage multiple autonomous driving services and their communications; third, we developed an edge-cloud coordinator to dynamically offload tasks to the cloud to optimize client system energy consumption. To the best of our knowledge, this is the first complete edge computing system for a production autonomous vehicle. In addition, we successfully implemented π-Edge on an Nvidia Jetson and demonstrated that we could support multiple autonomous driving services with only 11 W of power consumption, demonstrating the effectiveness of the proposed π-Edge system.
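The edge-cloud coordinator's offloading decision can be sketched as a simple energy comparison: offload when transmitting the task's input data costs the client less energy than computing locally. This is an illustrative model only; the energy coefficients and the decision rule are assumptions, not π-Edge's actual policy.

```python
def should_offload(task_cycles, data_bytes,
                   local_energy_per_cycle=1e-9,   # joules per CPU cycle (assumed)
                   tx_energy_per_byte=5e-7):      # joules per transmitted byte (assumed)
    # Offload when sending the input data costs less client-side energy than
    # running the task locally; the cloud's own energy use is not the client's cost.
    local_cost = task_cycles * local_energy_per_cycle
    offload_cost = data_bytes * tx_energy_per_byte
    return offload_cost < local_cost

# A compute-heavy task with a small input favors the cloud (0.05 J vs 5.0 J locally);
# a light task with bulky input stays on the vehicle (0.5 J to send vs 0.001 J locally).
heavy = should_offload(task_cycles=5_000_000_000, data_bytes=100_000)
light = should_offload(task_cycles=1_000_000, data_bytes=1_000_000)
```

A real coordinator would also weigh latency deadlines and link quality, but the energy trade-off above is the core of the optimization the abstract describes.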

    Toward Refactoring of DMARF and GIPSY Case Studies -- a Team 9 SOEN6471-S14 Project Report

    Software architecture consists of a series of decisions taken to produce a structural solution that meets all technical and operational requirements. This paper concerns code refactoring, the process of changing the internal structure of code without altering its external behavior, and focuses on experimental studies of two open-source systems, DMARF and GIPSY. We went through various research papers and analyzed their architectures. Refactoring improves the understandability, maintainability, and extensibility of code. Code smells were identified with tools such as JDeodorant, Logiscope, and CodePro, and reverse engineering of DMARF and GIPSY was performed with ObjectAid UML to understand the systems. For better understanding, use cases, a domain model, and a design class diagram were built.Comment: 29 pages

    A Microservice-enabled Architecture for Smart Surveillance using Blockchain Technology

    While smart surveillance systems enhanced by Internet of Things (IoT) technology become an essential part of Smart Cities, they also bring new concerns about data security. Compared with traditional surveillance systems, which are built on a monolithic architecture to carry out lower-level operations such as monitoring and recording, modern surveillance systems are expected to support more scalable and decentralized solutions for advanced video stream analysis across large volumes of distributed edge devices. In addition, the centralized architecture of conventional surveillance systems is vulnerable to single points of failure and privacy breaches owing to the lack of protection of the surveillance feed. This position paper introduces a novel secure smart surveillance system based on a microservices architecture and blockchain technology. Encapsulating the video analysis algorithms as independent microservices not only isolates the video feed from different sectors but also improves system availability and robustness by decentralizing the operations. The blockchain technology securely synchronizes the video analysis databases among microservices across surveillance domains and provides tamper-proofing of data in a trustless network environment. A smart-contract-enabled access authorization strategy prevents unauthorized users from accessing the microservices and offers a scalable, decentralized, and fine-grained access control solution for smart surveillance systems.Comment: Submitted as a position paper to the 1st International Workshop on BLockchain Enabled Sustainable Smart Cities (BLESS 2018)
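The tamper-proofing idea can be illustrated with a minimal hash chain over analysis records: each block commits to its predecessor's hash, so altering any earlier record invalidates every later link. This is a generic sketch of the underlying mechanism, not the paper's blockchain implementation; all record fields are hypothetical.

```python
import hashlib
import json

def append_block(chain, record):
    # each block commits to the previous block's hash, forming a tamper-evident chain
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    # recompute every hash and check the back-links; any edit breaks verification
    prev = "0" * 64
    for block in chain:
        body = {"record": block["record"], "prev": block["prev"]}
        if block["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"camera": "cam-01", "event": "motion", "ts": 1})
append_block(chain, {"camera": "cam-02", "event": "person", "ts": 2})
ok = verify(chain)
chain[0]["record"]["event"] = "nothing"   # tamper with an earlier analysis record
tampered_ok = verify(chain)
```

A consensus protocol among the microservices would decide which chain is authoritative; the hash chain alone only makes tampering detectable, which is the "tamper-proofing" property the abstract claims.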

    Large-scale Artificial Neural Network: MapReduce-based Deep Learning

    Faced with the continuously increasing scale of data, the original back-propagation neural network machine learning algorithm presents two non-trivial challenges: the huge amount of data makes it difficult to maintain both efficiency and accuracy, and redundant data aggravates the system workload. This project focuses on solving these issues by combining a deep learning algorithm with a cloud computing platform to deal with large-scale data. A MapReduce-based handwriting character recognizer is designed to verify the efficiency improvement this mechanism achieves when training on practical large-scale data. Careful discussion and experiments illustrate how the deep learning algorithm trains on handwritten digit data, how MapReduce is implemented on a deep learning neural network, and why this combination accelerates computation. Besides performance, scalability and robustness are addressed in this report as well. Our system comes with two demonstration programs that visually illustrate our handwritten digit recognition/encoding application
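The MapReduce pattern for neural network training typically maps a gradient computation over data shards and reduces by summation before a single weight update. The sketch below shows that pattern on a deliberately tiny least-squares model (y = w*x); the model, data, and learning rate are illustrative assumptions, not the project's recognizer.

```python
from functools import reduce

def map_gradient(shard, w):
    # map step: gradient of 0.5*(w*x - y)^2 summed over one data shard,
    # paired with the shard size so the reducer can normalize
    g = sum((w * x - y) * x for x, y in shard)
    return (g, len(shard))

def reduce_gradients(a, b):
    # reduce step: gradients and counts combine by simple addition
    return (a[0] + b[0], a[1] + b[1])

# three shards of data drawn from y = 2x, as they might sit on three workers
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)], [(4.0, 8.0)]]
w = 0.0
for _ in range(300):
    g, n = reduce(reduce_gradients, (map_gradient(s, w) for s in shards))
    w -= 0.01 * (g / n)   # one synchronized update per MapReduce round
```

Because addition is associative and commutative, the reduce step gives the same total gradient regardless of shard order, which is exactly what makes gradient computation a natural fit for MapReduce.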

    An Edge-Computing Based Architecture for Mobile Augmented Reality

    To mitigate the long processing delay and high energy consumption of mobile augmented reality (AR) applications, mobile edge computing (MEC) has recently been proposed and is envisioned as a promising means to deliver a better quality of experience (QoE) to AR consumers. In this article, we first present a comprehensive AR overview, including the indispensable components of general AR applications, fashionable AR devices, and several existing techniques for overcoming the thorny latency and energy consumption problems. Then, we propose a novel hierarchical computation architecture by inserting an edge layer between the conventional user layer and cloud layer. Based on the proposed architecture, we further develop an innovative operation mechanism to improve the performance of mobile AR applications. Three key technologies are also discussed to further assist the proposed AR architecture. Simulation results are finally provided to verify that our proposals can significantly improve latency and energy performance compared with existing baseline schemes.Comment: This manuscript has been accepted by IEEE Network
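The rationale for inserting an edge layer can be seen in a back-of-the-envelope latency model: total delay is processing time plus network delay to the chosen tier. All CPU speeds, workloads, and network delays below are invented for illustration; they are not the article's simulation parameters.

```python
def latency_ms(work_mcycles, cpu_mhz, uplink_ms=0.0):
    # processing delay (megacycles / MHz, in seconds, converted to ms)
    # plus the one-way network delay to reach that tier
    return work_mcycles / cpu_mhz * 1000.0 + uplink_ms

frame_work = 2000.0  # megacycles to process one AR recognition frame (assumed)
device = latency_ms(frame_work, cpu_mhz=1000.0)                   # on-device: slow CPU, no network
edge   = latency_ms(frame_work, cpu_mhz=20000.0, uplink_ms=5.0)   # edge layer: fast and nearby
cloud  = latency_ms(frame_work, cpu_mhz=32000.0, uplink_ms=80.0)  # cloud: fastest CPU, farthest away
```

With these numbers the device takes 2000 ms, the cloud 142.5 ms, and the edge 105 ms: the edge's modest compute beats the cloud's superior compute because its network delay is an order of magnitude smaller, which is the core argument for the hierarchical user/edge/cloud design.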

    Recent Advances and Challenges in Ubiquitous Sensing

    Ubiquitous sensing is tightly coupled with activity recognition. This survey reviews recent advances in ubiquitous sensing and looks ahead to promising future directions. In particular, ubiquitous sensing is crossing new barriers, giving us new ways to interact with the environment or to inspect our psyche. Through sensing paradigms that parasitically utilise stimuli from the noise of environmental, third-party pre-installed systems, sensing leaves the boundaries of the personal domain. Compared with previous environmental sensing approaches, these new systems mitigate high installation and placement costs by providing robustness to process noise. At the same time, sensing is turning inward, attempting to capture mental activities such as cognitive load, fatigue, or emotion through advances in, for instance, eye-gaze sensing systems or the interpretation of body gesture and pose. This survey summarises these developments and discusses current research questions and promising future directions.Comment: Submitted to PIEE