
    Web based applications for energy management system incorporated network reconfiguration using genetic algorithm / Wan Adnan Wan

    The Web Based Applications for Energy Management System in UiTM has established itself as a strong medium for distributed computing: a powerful, platform-independent network user interface. This research involved the development of Web Based Applications for Energy Management System, providing a Web Application System, a Web Monitoring System, and a Genetic Algorithm Based Network Reconfiguration Technique for loss minimization in the UiTM Distribution System. Web Based Applications for Energy Management System can thus be broadly classified into three basic components. The Web Monitoring System is an information system in which data from the energy meters (33 ION™) are systematically collected, handled, managed, analyzed, and presented over time. The Web Application System is a program designed for the client to configure a system (e.g. to calculate a load flow); its development uses an ActiveX technology approach. The Genetic Algorithm Based Network Reconfiguration Technique for loss minimization in the UiTM Distribution System is proposed based on a general combinatorial optimization algorithm. The development uses Active Server Pages (ASP), HTML and C/C++. The program was tested on a Windows platform, a typical development environment for Web Based Applications for Energy Management System in UiTM, and provides access to UiTM's personnel via the Internet or the UiTM network. The results show that an optimal configuration of the 32 feeders or substations in UiTM provides loss minimization, reducing the active power loss in the UiTM distribution network.
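
    The genetic-algorithm reconfiguration step can be pictured as a search over which tie switches to leave open. The sketch below, in Python rather than the paper's ASP/C++ stack, uses a made-up loss function as a stand-in for the load-flow evaluation; the 32-switch count echoes the abstract, while N_OPEN, the population size and the operators are illustrative assumptions.

```python
import random

# Minimal GA sketch for network reconfiguration. A candidate solution is the
# set of switches kept open; fitness is the (here, toy) power loss.

N_SWITCHES = 32   # echoes the 32 feeders in the abstract
N_OPEN = 5        # number of tie switches kept open (an assumption)

def losses(open_switches):
    """Toy stand-in for an I^2*R loss evaluation of a candidate topology."""
    rng = random.Random(hash(open_switches))
    return sum((s % 7) + rng.random() for s in open_switches)

def make_individual():
    return tuple(sorted(random.sample(range(N_SWITCHES), N_OPEN)))

def crossover(a, b):
    pool = list(set(a) | set(b))          # child inherits switches from parents
    return tuple(sorted(random.sample(pool, N_OPEN)))

def mutate(ind, rate=0.1):
    ind = list(ind)
    for i in range(len(ind)):
        if random.random() < rate:        # swap a switch for an unused one
            ind[i] = random.choice([s for s in range(N_SWITCHES) if s not in ind])
    return tuple(sorted(ind))

def reconfigure(pop_size=40, generations=60):
    pop = [make_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=losses)              # lower loss = fitter
        elite = pop[: pop_size // 5]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=losses)

best = reconfigure()
print("best open-switch set:", best, "loss:", round(losses(best), 2))
```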

    Web-based decision support system for industrial operations management

    To remain sustainable in today's global economy, industrial companies must have well-managed systems and operations to keep up with the competition. For this they can take advantage of Web and Internet based technologies: good management resources and methods that would otherwise be unavailable can be accessed through the Internet, while at the same time benefiting from the collaboration provided by networks of partners and users. Because of the complexity of operations management, a company that does not have access to good algorithms usually draws upon simple, empirical procedures whose solutions tend to be of poor quality. This situation can be avoided if companies have easy access to good operations management algorithms or services, since the pool of knowledge on industrial operations management developed by academia and industry over the years can be made available, through the Internet, to a large community of users. This idea is explored towards the development of a web-based system for Industrial Operations Management, based on a P2P network of operations management algorithm providers and users. The paper thus describes a web system for aiding the resolution of Operations Management problems through collaboration, based on a network of distributed resources and users, web services and other Internet technology. The system adopts a P2P network architecture to create and enable a decentralized, global industrial operations management environment. It includes a set of functionalities accessed through the P2P network, which holds algorithms for solving different types of Operations Management problems. The algorithms are selected through a user-friendly interface, automatically generated for each specific problem context, which includes loading existing XML problem data documents and searching and running algorithms on the peers belonging to the P2P network. (Universidade do Minho; Fundação para a Ciência e a Tecnologia, FCT)
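
    As a rough illustration of the P2P idea, the sketch below simulates peers advertising solvers for named Operations Management problem types and a client dispatching a problem to a provider. All names (Peer, Network, the "eoq" problem type) are hypothetical; the real system works over web services and XML problem documents.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Peer:
    """A network participant that offers solvers for problem types."""
    name: str
    solvers: Dict[str, Callable] = field(default_factory=dict)

    def advertise(self, problem_type: str, solver: Callable):
        self.solvers[problem_type] = solver

class Network:
    """Toy stand-in for the P2P registry of algorithm providers."""
    def __init__(self):
        self.peers: List[Peer] = []

    def join(self, peer: Peer):
        self.peers.append(peer)

    def find_providers(self, problem_type: str) -> List[Peer]:
        return [p for p in self.peers if problem_type in p.solvers]

    def run(self, problem_type: str, data):
        providers = self.find_providers(problem_type)
        if not providers:
            raise LookupError(f"no peer solves {problem_type!r}")
        return providers[0].solvers[problem_type](data)

# Example provider: a trivial economic order quantity (EOQ) solver.
def solve_eoq(d):
    return (2 * d["demand"] * d["order_cost"] / d["holding_cost"]) ** 0.5

net = Network()
peer = Peer("peer-A")
peer.advertise("eoq", solve_eoq)
net.join(peer)
print(net.run("eoq", {"demand": 1200, "order_cost": 50, "holding_cost": 2}))
```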

    Online Clearance System (OC System)

    The purpose of this project is to design a system for the UTP Security Department, Information Resource Centre Department and Finance Department, specifically for the administration management of students and staff. The system will be a web-based application that can be executed in a normal web browser for inter-platform capability. The project is divided into two parts: first, research on the clearance system for final-semester students, and second, development of the Online Clearance System (OC System). Research on the OC System is based on the problem statement and objectives of the project, while the study of the current clearance system for the final semester at UTP supports the idea for the project. This document also gives further information about the system in the literature review/theory section, which includes the features of the system, the benefits of using it, and the data flow diagram of the intended system. As part of the Final Year Project, the student became acquainted with the business environment: how the departments manage their database, the clearance system and the performance of the staff. The Online Clearance System (OC System) gives the best solution for the staff, as the database is an important asset for them. The system uses a distributed database: a database under the control of a central database management system in which the storage devices are not all attached to a common CPU. It may be stored on multiple computers located in the same physical location, or may be dispersed over a network of interconnected computers. Collections of data (e.g. in a database) can thus be distributed across multiple physical locations. A distributed database is divided into separate partitions/fragments, and each partition/fragment may be replicated (i.e. redundant fail-overs, RAID-like).
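
    A minimal sketch of the fragment-and-replicate scheme the abstract describes: records are hash-partitioned across nodes, and each fragment is written to more than one node for redundancy. The node names and replication factor are assumptions, not UTP's actual deployment.

```python
import hashlib

NODES = ["security-db", "irc-db", "finance-db"]   # hypothetical department nodes
REPLICAS = 2                                      # each fragment lives on two nodes

def nodes_for(key: str):
    """Pick REPLICAS distinct nodes for a record by hashing its key."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    start = h % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

store = {n: {} for n in NODES}    # in-memory stand-in for the node databases

def put(key: str, value):
    for node in nodes_for(key):
        store[node][key] = value  # write to every replica

def get(key: str):
    for node in nodes_for(key):   # any surviving replica can answer
        if key in store[node]:
            return store[node][key]
    raise KeyError(key)

put("student-1234", {"cleared": False})
print(get("student-1234"), "stored on", nodes_for("student-1234"))
```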

    Performance of distributed information systems

    There is an increasing use of distributed computer systems to provide services both in traditional telephony and in the Internet. Two main technologies are Distributed Object Computing (DOC) and Web based services. One common DOC architecture investigated in this thesis is the Common Object Request Broker Architecture (CORBA), specified by the Object Management Group. CORBA applications consist of interacting software components called objects. Two other DOC architectures investigated are the Telecommunications Information Networking Architecture (TINA) and a CORBA based Intelligent Network (IN/CORBA) system. In a DOC environment, the objects of an application are distributed on multiple nodes. A middleware layer makes the distribution transparent to the application. However, the distributed nature creates a number of potential performance problems. Three problems in DOC systems are examined in this thesis: object distribution, load balancing and overload protection. An object distribution describes how objects are distributed in the network. The objective is to distribute the objects on the physical nodes in such a way that inter-node communication overhead is as small as possible. One way to solve the object distribution problem is to use linear programming, with constraints given both by ease of management of the system and by performance concerns. Load balancing is used when there are multiple objects that can be used at a particular time. The objective of load balancing is to distribute the load efficiently on the available nodes. This thesis investigates a number of decentralized load balancing mechanisms, including one based on the use of intelligent agents. Finally, overload protection mechanisms for DOC systems are investigated. While overload protection is well researched for telecom networks, little previous work concerns DOC and overload protection. The thesis also examines the use of overload protection in e-commerce web servers. Two schemes are compared: one which handles admission to the e-commerce site on a request basis, and another which handles admission on a session basis. The session based mechanism is shown to be better in terms of user-experienced performance.
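
    The request- versus session-based comparison can be illustrated with a toy simulation: under overload, request-based admission spreads capacity thinly across many interleaved sessions so few complete, while session-based admission completes every session it admits. The capacity and session-length figures below are illustrative assumptions, not the thesis's experimental values.

```python
import random

CAPACITY = 80            # requests the server can take in a busy period
REQS_PER_SESSION = 5     # requests a user needs to complete a session

def request_based(sessions):
    # Requests from all active sessions interleave and compete for capacity.
    order = [s for s in range(sessions) for _ in range(REQS_PER_SESSION)]
    random.shuffle(order)
    served, done = 0, [0] * sessions
    for s in order:
        if served < CAPACITY:
            served += 1
            done[s] += 1
    return sum(1 for n in done if n == REQS_PER_SESSION)   # completed sessions

def session_based(sessions):
    # A session is admitted whole or rejected at its first request.
    return min(sessions, CAPACITY // REQS_PER_SESSION)

random.seed(1)
for offered in (10, 20, 40):
    print(f"{offered} sessions -> request-based completes "
          f"{request_based(offered)}, session-based completes {session_based(offered)}")
```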

    Towards a decentralised common data environment using linked building data and the solid ecosystem

    With the emergence of Building Information Modelling (BIM), the construction industry is rapidly catching up with the digital revolution that has boosted productivity in virtually all economic sectors. In current practice, the focus of BIM lies on the exchange of documents, often in proprietary formats or exchanged using the Industry Foundation Classes (IFC). However, with web technologies such as RDF, OWL and SPARQL, a data- and web-based BIM paradigm comes within reach. The decentralisation of data and the decoupling of information and applications will foster a more general adoption of Big Open BIM, and is expected to lower the BIM threshold for smaller companies that are active in different phases of the building life cycle. Since one of the promises of the Semantic Web and Linked Data is highly improved interoperability between different disciplines, it is not necessary to reinvent the wheel when setting up an infrastructure that supports such a network of decentralised tools and data. In this paper, we evaluate the specifications provided by the Solid project (Inrupt Inc.), a Linked Data-based ecosystem for Social Linked Data. Although the exemplary use case of the Solid ecosystem is the decentralisation of data and applications for social networking purposes, we notice a considerable overlap with recent ambitions and challenges for a web-based AECO industry (Architecture, Engineering, Construction and Operation): standardised data representations, role- or actor-based authorisation and authentication, and the need for modular, extensible applications dedicated to specific use cases. After a brief introduction to Linked Data and its applications in the building industry, we discuss present solutions for building data management (Common Data Environments, multimodels, etc.). In order to translate these approaches to a Linked Data context with minimal effort and maximal effect, we then review the Solid specifications for use in a construction-oriented web ecosystem. As a proof of concept, we discuss the setup of a web service for the creation and management of Linked Building Data, generated with the Solid-React generator. This application is envisaged as a bridge between the multiple data stores of different project stakeholders and the end user. It acts as an interface to a distributed Common Data Environment that also allows the generation of multi-models.
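
    To make the Linked Building Data side concrete, here is a minimal rdflib sketch using the Building Topology Ontology (BOT); the pod URL is hypothetical. A Solid pod would serve such RDF resources over HTTP with per-actor access control, and the SPARQL query is the same whether the graph is local or fetched from a pod.

```python
from rdflib import Graph, Namespace

BOT = Namespace("https://w3id.org/bot#")               # Building Topology Ontology
EX = Namespace("https://pod.example.org/project-x/")   # hypothetical Solid pod

# Build a tiny building topology graph: building -> storey -> space -> element.
g = Graph()
g.bind("bot", BOT)
g.add((EX.building1, BOT.hasStorey, EX.storey1))
g.add((EX.storey1, BOT.hasSpace, EX.office101))
g.add((EX.office101, BOT.hasElement, EX.door1))

# Query which elements sit in which spaces.
q = """
PREFIX bot: <https://w3id.org/bot#>
SELECT ?space ?element WHERE {
  ?storey bot:hasSpace ?space .
  ?space bot:hasElement ?element .
}
"""
for row in g.query(q):
    print(row.space, "contains", row.element)
```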

    A LOW-COST ICT SOLUTION TO SUPPORT VISITORS IN TOURISTIC CAVES

    This study aims to examine innovative solutions to enhance the tourist experience and visitor localization within the Bossea Cave, one of the most important karstic caves in Italy, located in the municipality of Frabosa Soprana. The lack of advanced technological tools for managing cave structures and modernizing visitor tours creates a significant opportunity for the integration of new technologies. The researchers propose a low-cost, modular hardware infrastructure consisting of a series of single-board computers distributed throughout the cave, acting as local web servers that provide visitors with customizable multimedia content, accessible through a web application on their personal devices via a local Wi-Fi network. This infrastructure also enables visitor localization based on their connection point within the cave, with the additional goal of testing ultra-wideband (UWB) wireless technology in this complex and humid environment. UWB technology offers high-precision localization, even in indoor environments and caves, where GNSS signals are not available. Overall, the study provides a promising solution to enhance the visitor experience and offers opportunities for cave management and research.
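
    The connection-point localization reduces to a lookup: the access point a visitor's device associates with identifies the cave zone, which in turn selects the content to serve. A minimal sketch, with made-up BSSIDs, zone names and content lists:

```python
# Map each access point (one per single-board computer) to a cave zone.
AP_ZONES = {
    "b8:27:eb:00:00:01": "entrance-hall",
    "b8:27:eb:00:00:02": "great-gallery",
    "b8:27:eb:00:00:03": "underground-lake",
}

# Zone-specific multimedia content served by the local web server.
CONTENT = {
    "entrance-hall": ["welcome.mp4", "cave-history.html"],
    "great-gallery": ["stalactites.mp4"],
    "underground-lake": ["hydrology.html", "fauna.mp4"],
}

def content_for(bssid: str):
    """Return the zone and content for the AP a visitor is connected to."""
    zone = AP_ZONES.get(bssid.lower())
    if zone is None:
        return "unknown-zone", []
    return zone, CONTENT[zone]

print(content_for("B8:27:EB:00:00:02"))
```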

    D³-MapReduce: Towards MapReduce for Distributed and Dynamic Data Sets

    Since its introduction in 2004 by Google, MapReduce has become the programming model of choice for processing large data sets. Although MapReduce was originally developed for use by web enterprises in large data centers, this technique has gained a lot of attention from the scientific community for its applicability in large parallel data analysis (including geographic, high energy physics, genomics, etc.). So far MapReduce has been mostly designed for batch processing of bulk data. The ambition of D³-MapReduce is to extend the MapReduce programming model and propose an efficient implementation of this model to: i) cope with distributed data sets, i.e. ones that span multiple distributed infrastructures or are stored on networks of loosely connected devices; ii) cope with dynamic data sets, i.e. ones which change over time or can be incomplete or only partially available. In this paper, we draw the path towards this ambitious goal. Our approach leverages the Data Life Cycle as a key concept to provide MapReduce for distributed and dynamic data sets on heterogeneous and distributed infrastructures. We first report on our attempts at implementing the MapReduce programming model for Hybrid Distributed Computing Infrastructures (Hybrid DCIs). We present the architecture of the prototype based on BitDew, a middleware for large scale data management, and Active Data, a programming model for data life cycle management. Second, we outline the challenges in terms of methodology and present our approaches based on simulation and emulation on the Grid'5000 experimental testbed. We conduct performance evaluations and compare our prototype with Hadoop, the industry reference MapReduce implementation. We present our work in progress on dynamic data sets, which has led us to implement an incremental MapReduce framework. Finally, we discuss our achievements and outline the challenges that remain to be addressed before obtaining a complete D³-MapReduce environment.
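
    The incremental MapReduce idea mentioned at the end can be sketched as caching one partial reduction per input chunk, so that when a chunk of a dynamic data set arrives or changes, only that chunk is re-mapped and the merge is redone. This toy word count illustrates the principle; it is not the D³-MapReduce/BitDew implementation.

```python
from collections import Counter

def map_chunk(text):
    """Map + local combine: count words within one chunk."""
    return Counter(text.split())

class IncrementalWordCount:
    def __init__(self):
        self.partials = {}                 # chunk id -> cached partial Counter

    def update_chunk(self, chunk_id, text):
        self.partials[chunk_id] = map_chunk(text)   # re-map only this chunk

    def remove_chunk(self, chunk_id):
        self.partials.pop(chunk_id, None)  # data may become unavailable

    def result(self):
        total = Counter()
        for partial in self.partials.values():      # reduce: merge partials
            total.update(partial)
        return total

wc = IncrementalWordCount()
wc.update_chunk("c1", "map reduce map")
wc.update_chunk("c2", "reduce data data")
wc.update_chunk("c1", "map only")          # chunk c1 changed: re-map just c1
print(wc.result())                         # Counter({'data': 2, 'map': 1, ...})
```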

    Collaborative Resource Allocation

    Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove a new concept: using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure, with its corresponding database design, combines object trees having multiple associated mortal instances with a relational database to provide unprecedented traceability and to simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting, from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.
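
    At its core, the conflict checking CRANE provides boils down to detecting tracking times that overlap on the same antenna. A minimal sketch, with hypothetical project names and a deliberately simplified schema (CRANE's actual representation is the DSN XML schedule):

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Track:
    project: str
    antenna: str
    start: float    # hours from some epoch, for simplicity
    end: float

def conflicts(tracks):
    """Return pairs of projects whose tracks overlap on the same antenna."""
    out = []
    by_antenna = {}
    for t in tracks:
        by_antenna.setdefault(t.antenna, []).append(t)
    for ts in by_antenna.values():
        for a, b in combinations(ts, 2):
            if a.start < b.end and b.start < a.end:   # time intervals overlap
                out.append((a.project, b.project))
    return out

schedule = [
    Track("VoyagerX", "DSS-14", 10.0, 14.0),   # hypothetical requests
    Track("MarsRelay", "DSS-14", 13.0, 16.0),
    Track("LunarOrb", "DSS-43", 10.0, 12.0),
]
print(conflicts(schedule))    # [('VoyagerX', 'MarsRelay')]
```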

    Standardization of the Local/Regional Networks and Web Infrastructure for Database Processing, and Its Role in Development of the Regionally Integrated Businesses

    Most people who use web services and the Internet have little awareness of the computer network infrastructure that enables the same information technology infrastructure to be used in several countries of a region. With the evolution of new technology, networks have become more complex, and they play a vital role in supporting business performance. By taking advantage of technology, with the help of computer networks, new services can be added, thereby increasing productivity. This paper covers the importance of computer networks for the implementation and delivery of information based on distributed computer networks. Developing applications for the Internet/Intranet seeks to overcome the limitations of Web interface forms. In today's Internet-based environment, database administrators should understand the implications of Web application development for the database in terms of: the types of extended data supported, database security, transaction management in the database, and database design. Internet/Intranet applications designed for larger companies are complex because of the widespread segmentation of the business, the larger number of employees, and the great variety of different functions. DOI: 10.5901/ajis.2015.v4n2s1p9