Rapid troubleshooting and management of a network using a chatbot
Managing a large and complex network is currently tedious and error-prone: administrators log in via lengthy authentication processes; look up and access a diversity of network devices, services, tools and other entities with differing communication protocols; issue relatively obscure, device-specific commands; etc. These problems of network management become especially manifest while troubleshooting critical issues under time constraints. This disclosure describes a fast, simple, and unified approach to network management via a chat interface. At the back end of the chat interface is a bot that is in communication with a variety of network entities in their native protocols. The bot receives aliased, nearly natural-language commands from the administrator via the chat interface, acts on such commands by communicating with relevant network entities, and returns responses via the chat interface. Per the disclosed techniques, the network can be managed over simple, widely available interfaces such as chat over a mobile device, and without lengthy or complicated procedures.
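The aliased-command dispatch described above can be sketched roughly as follows. This is a minimal illustration, not the disclosed implementation; all names (the class, the `reboot` alias, the handler) are hypothetical.

```python
# Minimal sketch of a chat bot that maps short command aliases to
# protocol-specific handler functions. Names are illustrative only.

class NetworkChatBot:
    """Dispatches aliased chat commands to registered handlers."""

    def __init__(self):
        self.handlers = {}

    def register(self, alias, handler):
        self.handlers[alias] = handler

    def handle(self, message):
        parts = message.strip().split()
        if not parts:
            return "empty command"
        alias, args = parts[0], parts[1:]
        handler = self.handlers.get(alias)
        if handler is None:
            return f"unknown command: {alias}"
        # A real handler would speak the device's native protocol here.
        return handler(*args)

# Hypothetical handler standing in for device-specific communication.
def reboot_switch(name):
    return f"reboot issued to switch {name}"

bot = NetworkChatBot()
bot.register("reboot", reboot_switch)
print(bot.handle("reboot sw-core-1"))  # reboot issued to switch sw-core-1
```

A response produced by the handler would then be relayed back to the administrator through the same chat channel.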
AstroGrid-D: Grid Technology for Astronomical Science
We present the status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines through a set of commands as well as software interfaces. It allows simple use of compute and storage facilities and the scheduling and monitoring of compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context that led to the demand for advanced software solutions in astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites, and advanced applications for specific scientific purposes, such as a connection to robotic telescopes. These examples show how grid execution improves, for example, the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 focuses on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.
Comment: 14 pages, 12 figures. Subjects: data analysis, image processing, robotic telescopes, simulations, grid. Accepted for publication in New Astronomy
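The job manager mentioned above, which automatically submits compute tasks to distributed sites, might be sketched as follows. This is a hypothetical illustration of the general pattern, not the actual AstroGrid-D or Globus Toolkit (GT4) API; the class, method names, and site names are all assumptions.

```python
# Hedged sketch of automatic job submission across grid sites using a
# simple round-robin policy. Not the real GT4 / AstroGrid-D interface.
from collections import deque

class JobManager:
    """Queues compute tasks and assigns them to execution sites."""

    def __init__(self, sites):
        self.sites = list(sites)   # available execution sites
        self.queue = deque()
        self.log = []              # (task, site) assignments

    def submit(self, task):
        self.queue.append(task)

    def dispatch_all(self):
        while self.queue:
            task = self.queue.popleft()
            site = self.sites[len(self.log) % len(self.sites)]  # round robin
            self.log.append((task, site))
        return self.log

jm = JobManager(["site-a.example", "site-b.example"])
jm.submit("nbody-run-01")
jm.submit("nbody-run-02")
jm.dispatch_all()
```

A production job manager would additionally stage input files via the data management system and poll the sites for job status.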
A Computer Modeling Approach Using Critical Resource Diagramming Network Analysis in Project Scheduling
The problem of resource-constrained project scheduling (RCPSP) continues to be an important topic in project management. Different scheduling processes have been introduced to solve cases of RCPSP. Most of the developed methods are based on a network analysis approach. The two main techniques of project network analysis used for planning, scheduling, and control are PERT and CPM. These approaches assume unlimited resource availability in project network analysis. In realistic projects, both the time and resource requirements of activities should be considered in developing network schedules. Another characteristic of the methods developed so far is the focus on activities during the scheduling process. Therefore, from a resource point of view, the current procedures do not allow the project manager to incorporate information concerning each resource unit under supervision into the scheduling process of a project. There is a need for simple tools for resource planning, scheduling, tracking, and control.
Critical resource diagramming (CRD) is a relatively new resource management tool. CRD is a simple extension of the CPM technique developed for resource management purposes. Unlike activity networks, a CRD uses nodes to represent each resource unit. Also, in contrast with activities, a resource unit may appear more than once in a CRD network, specifying all the different tasks to which a particular unit is assigned. As in CPM, the same backward and forward computations may be performed on a CRD.
The CRD method can also be used to solve RCPSP problems. The present study explores this advantage of CRD. The purposes are to develop a methodology, based on CRD, that allows its use in specific cases of RCPSP; to implement the developed technique in a computer model; to conduct tests to validate the CRD computer model; and then to investigate the advantages and disadvantages of the introduced method. The CRD computer model will be implemented using the Visual Basic 6.0 language.
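The CPM-style forward and backward passes that the text says carry over to a CRD network can be sketched as below. The node names and durations are hypothetical; following the CRD convention, each node stands for one task of a resource unit, so the same unit (here R1) may appear more than once.

```python
# Hedged sketch of CPM forward/backward passes applied to a CRD-style
# network where nodes are resource-unit tasks, not activities.

def forward_backward(durations, preds):
    """Return (early_start, late_start) dicts for an acyclic network.

    durations: node -> duration, keys listed in topological order.
    preds:     node -> list of predecessor nodes.
    """
    order = list(durations)
    # Forward pass: earliest start of each node.
    es = {n: 0 for n in order}
    for n in order:
        for p in preds[n]:
            es[n] = max(es[n], es[p] + durations[p])
    finish = max(es[n] + durations[n] for n in order)
    # Backward pass: latest finish, then latest start.
    lf = {n: finish for n in order}
    for n in reversed(order):
        for p in preds[n]:
            lf[p] = min(lf[p], lf[n] - durations[n])
    ls = {n: lf[n] - durations[n] for n in order}
    return es, ls

# Resource unit R1 appears twice (two assigned tasks), as CRD allows.
durations = {"R1-a": 3, "R2-a": 2, "R1-b": 4}
preds = {"R1-a": [], "R2-a": ["R1-a"], "R1-b": ["R1-a", "R2-a"]}
es, ls = forward_backward(durations, preds)
# Critical nodes are those with es == ls (zero slack).
```

As with CPM, the nodes with zero slack form the critical resource path.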
Using Internet Protocols to Implement IEC 60870-5 Telecontrol Functions
The telecommunication networks of telecontrol systems in electric utilities have undergone an innovation process. This has removed many of their technical restrictions and made it possible to consider carrying out telecontrol tasks with general standard protocols instead of the specific ones that are currently used. These are defined in the standards 60870-5, 60870-6, and 61850 from the International Electrotechnical Commission, among others. This paper is about the implementation, using the services of general standard protocols, of the telecontrol application functions defined by the standard IEC 60870-5-104. The general protocols used to carry out telecontrol tasks are those used on the Internet: the network-management protocol SNMPv3 (Simple Network Management Protocol version 3), the clock-synchronization Network Time Protocol (NTP), and Secure Shell (SSH). With this new implementation, we have achieved, among others, two important aims: 1) to improve performance and, above all, 2) to solve the serious security problems present in the telecontrol protocols currently being used. These problems were presented by IEEE in an article published on the website of the IEEE Standards Association. In this paper, the use of general standard protocols to perform the telecontrol of electrical networks is justified. The development of this paper, its achievements, conclusions and the tools used, is detailed.
Junta de Andalucía EXC-2005-TIC-1023
Ministerio de Educación y Ciencia TEC2006-0843
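For context on the IEC 60870-5-104 frames being replaced, the fixed 6-byte APCI header of an I-format APDU can be built as follows. The start byte 0x68, length octet, and four control octets follow the standard's layout; the `asdu` argument here is only a placeholder payload, not a real ASDU encoding.

```python
import struct

# Hedged sketch: encode the APCI of an IEC 60870-5-104 I-format APDU.
# N(S) and N(R) are 15-bit sequence numbers; the LSB of control octet 1
# being 0 marks the frame as I-format.
def apci_i_frame(send_seq, recv_seq, asdu=b""):
    cf1 = (send_seq << 1) & 0xFF        # N(S) low bits, LSB = 0
    cf2 = (send_seq >> 7) & 0xFF        # N(S) high bits
    cf3 = (recv_seq << 1) & 0xFF        # N(R) low bits, LSB = 0
    cf4 = (recv_seq >> 7) & 0xFF        # N(R) high bits
    length = 4 + len(asdu)              # control field + ASDU octets
    return struct.pack("BBBBBB", 0x68, length, cf1, cf2, cf3, cf4) + asdu

frame = apci_i_frame(3, 5)
# frame == b"\x68\x04\x06\x00\x0a\x00"
```

The paper's point is that such purpose-built frames can be tunneled or replaced by general Internet protocols (SNMPv3, NTP, SSH) that already provide authentication and encryption.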
Web 2.0 and micro-businesses: An exploratory investigation
This is the author's final version of the article. This article is (c) Emerald Group Publishing and permission has been granted for this version to appear here. Emerald does not grant permission for this article to be further copied/distributed or hosted elsewhere without the express permission from Emerald Group Publishing Limited. This article was chosen as a Highly Commended Award Winner at the Emerald Literati Network Awards for Excellence 2013.
Purpose – The paper aims to report on an exploratory study into how small businesses use Web 2.0 information and communication technologies (ICT) to work collaboratively with other small businesses. The study had two aims: to investigate the benefits available from the use of Web 2.0 in small business collaborations, and to characterize the different types of such online collaborations.
Design/methodology/approach – The research uses a qualitative case study methodology based on semi-structured interviews with the owner-managers of 12 UK-based small companies in the business services sector who are early adopters of Web 2.0 technologies.
Findings – Benefits from the use of Web 2.0 are categorized as lifestyle benefits, internal operational efficiency, enhanced capability, external communications and enhanced service offerings. A 2×2 framework is developed to categorize small business collaborations using the dimensions of the basis for inter-organizational collaboration (control vs cooperation) and the level of Web 2.0 ICT use (simple vs sophisticated).
Research limitations/implications – A small number of firms of similar size, sector and location were studied, which limits generalizability. Nonetheless, the results offer a pointer to the likely future use of Web 2.0 tools by other small businesses.
Practical implications – The research provides evidence of the attraction and potential of Web 2.0 for collaborations between small businesses.
Originality/value – The paper is one of the first to report on the use of Web 2.0 ICT in collaborative working between small businesses. It will be of interest to those seeking a better understanding of the potential of Web 2.0 in the small business community.
WestFocus
A Study of Basic 3D Visualization Architecture for Network Operation and Management Tools
Recently, network operation tools using 3D visualization technologies have become increasingly important. In general, 3D visualized network operation tools are useful for computer network management and operation. However, developing such tools requires advanced technical skills and is costly.
On the other hand, 3D computer graphics technologies have become more accessible in recent years because computer hardware and software have rapidly improved in performance. In this research, we have developed a basic architecture for a 3D visualization system for network operation and management tools, using the open-source 3DCG software ``Blender'' and the programming language ``Python''. In this paper, we explain the details, evaluation results, and efficiency of the proposed architecture.
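A tool of the kind described must first compute 3D coordinates for network nodes before handing them to a renderer such as Blender. The sketch below shows one simple layout step in plain Python; the function name and the circular layout are illustrative assumptions, not the paper's actual architecture.

```python
import math

# Hedged sketch: place network nodes evenly on a circle in 3D space,
# the kind of layout a Blender/Python visualization tool could consume.
def circular_layout(nodes, radius=5.0, z=0.0):
    """Return node -> (x, y, z) positions on a circle of given radius."""
    n = len(nodes)
    positions = {}
    for i, node in enumerate(nodes):
        angle = 2 * math.pi * i / n
        positions[node] = (radius * math.cos(angle),
                           radius * math.sin(angle), z)
    return positions

pos = circular_layout(["router", "switch-1", "switch-2", "host"])
```

In a Blender-based tool, each computed position would then be assigned to a scene object representing that network device.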
An Analysis of Data Quality Defects in Podcasting Systems
Podcasting has emerged as an asynchronous delay-tolerant method for the distribution of multimedia files through a network. Although podcasting has become a popular Internet application, users encounter frequent information quality problems in podcasting systems. To better understand the severity of these quality problems, we have applied the Total Data Quality Management methodology to podcasting. Through the application of this methodology we have quantified the data quality problems inherent within podcasting metadata, and performed an analysis that maps specific metadata defects to failures in popular commercial podcasting platforms. Furthermore, we extracted the Really Simple Syndication (RSS) feeds from the iTunes catalog for the purpose of performing the most comprehensive measurement of podcasting metadata to date. From these findings we attempted to improve the quality of podcasting data through the creation of a metadata validation tool, PodCop. PodCop extends existing RSS validation tools and encapsulates validation rules specific to the context of podcasting. We believe PodCop is the first attempt at improving the overall health of the podcasting ecosystem.
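A PodCop-style metadata check might look like the sketch below, which scans each RSS `<item>` for fields a podcast client needs. The rule set here is a hypothetical minimal example for illustration, not PodCop's actual validation rules.

```python
import xml.etree.ElementTree as ET

# Hedged sketch of podcast-metadata validation: report which items in
# an RSS feed are missing required fields. Rules are illustrative only.
REQUIRED_ITEM_FIELDS = ("title", "enclosure")

def find_defects(rss_xml):
    """Return a list of (item_index, missing_field) defects."""
    root = ET.fromstring(rss_xml)
    defects = []
    for i, item in enumerate(root.iter("item")):
        for field in REQUIRED_ITEM_FIELDS:
            if item.find(field) is None:
                defects.append((i, field))
    return defects

feed = """<rss version="2.0"><channel>
  <item><title>Episode 1</title>
    <enclosure url="http://example.com/ep1.mp3" type="audio/mpeg"/></item>
  <item><title>Episode 2</title></item>
</channel></rss>"""
print(find_defects(feed))  # [(1, 'enclosure')]
```

A fuller validator would also check field contents (e.g. that the enclosure URL resolves and the MIME type matches), which is where podcast-specific rules go beyond generic RSS validation.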