138 research outputs found

    Information Technology and Systems - II: Server Administration Networks

    A majority of IS graduates (56% in one recent survey) are involved in server administration, network administration, and IS security work. An important recent innovation in these areas is the deployment of separate networks dedicated to server administration and related tasks, combining the cost and productivity advantages of remote administration with risk levels comparable to console-based administrative access. Remote server administration is a previously undocumented artisanal tradition that evolved in scientific and technical network environments, and it is now becoming applicable to an increasing range of business networks. This tutorial article provides an overview of current server administration network architectures, and of the software, workstation, and user interface technologies associated with remote server administration.
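    The core idea of a dedicated administration network is that management services are reachable only through a separate interface. As a minimal sketch of that idea (not taken from the article), assuming a hypothetical management NIC at 10.10.0.5 and an arbitrary admin port, a service can bind to the admin interface alone rather than to all interfaces:

        import socket

        # Hypothetical addresses: this host's management NIC sits on a
        # dedicated administration network (10.10.0.0/24 here), separate
        # from the user-facing production network.
        ADMIN_IFACE_ADDR = "10.10.0.5"   # assumed admin-network address
        ADMIN_PORT = 2222                # assumed admin-service port

        # Binding to the admin address only (instead of "0.0.0.0") makes the
        # service unreachable from the production network -- the essence of a
        # separate server administration network.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((ADMIN_IFACE_ADDR, ADMIN_PORT))
        srv.listen(5)
        print(f"admin service listening only on {ADMIN_IFACE_ADDR}:{ADMIN_PORT}")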

    BIRCH: A user-oriented, locally-customizable, bioinformatics system

    BACKGROUND: Molecular biologists need sophisticated analytical tools, which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. RESULTS: BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end that provides a unified view of existing collections of bioinformatics software. Documentation for BIRCH and locally-added programs is merged into a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step-by-step examples, with screenshots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. CONCLUSION: BIRCH provides a versatile organizational framework for managing software and databases and making them accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.
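    The merging of a core distribution with locally-added resources can be pictured with a toy sketch. The categories, program names, and merge function below are invented for illustration; they are not BIRCH's actual file formats or APIs:

        # Core distribution and hypothetical local additions, each organized
        # as a category -> programs hierarchy.
        core_programs = {
            "alignment": ["clustalw", "muscle"],
            "phylogeny": ["phylip"],
        }
        local_programs = {
            "alignment": ["mafft"],
            "genome": ["local_assembler"],
        }

        def merge_hierarchies(core, local):
            """Fold local tools into the core hierarchy so users see one
            unified view, category by category."""
            merged = {cat: list(progs) for cat, progs in core.items()}
            for cat, progs in local.items():
                merged.setdefault(cat, []).extend(progs)
            return merged

        print(merge_hierarchies(core_programs, local_programs))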

    Self-tuning of disk input–output in operating systems

    The final publication is available via http://dx.doi.org/10.1016/j.jss.2011.07.030
    One of the most difficult tasks to learn in computer system management is tuning kernel parameters for maximum performance. Traditionally, this tuning has been done using either fixed configurations or the administrator's subjective criteria. The main bottleneck among the subsystems managed by the operating system is disk input/output (I/O). An evolutionary module has been developed to tune this subsystem automatically, using an adaptive and dynamic approach. Any change to the computer, whether at the hardware level or in the nature of the workload itself, causes the module to adapt automatically and transparently. Thus, system administrators are relieved of this kind of task, and performance is optimized for the circumstances of each system. The experiments performed show a throughput increase in 88.2% of cases and an average improvement of 29.63% over the default configuration of the Linux operating system. Average latency decreased in 77.5% of cases, with a mean reduction in I/O request processing time of 12.79%.
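    The abstract does not give the algorithm's details; the following is a minimal sketch of the general evolutionary idea using two real Linux block-layer tunables (read_ahead_kb and nr_requests under /sys/block/<dev>/queue/). The fitness function here is a synthetic stand-in so the sketch runs end to end; a real module would apply each candidate via sysfs and measure live I/O throughput:

        import random

        # Legal ranges for two Linux block-layer tunables (ranges are assumptions).
        PARAM_RANGES = {
            "read_ahead_kb": (0, 4096),   # /sys/block/<dev>/queue/read_ahead_kb
            "nr_requests":   (4, 1024),   # /sys/block/<dev>/queue/nr_requests
        }

        def benchmark(config):
            # Synthetic fitness that peaks at an arbitrary point, purely so the
            # sketch is runnable. A real module would write the values via sysfs
            # and measure throughput under the actual workload.
            return -abs(config["read_ahead_kb"] - 512) - abs(config["nr_requests"] - 128)

        def mutate(config):
            # Perturb one randomly chosen parameter, clamped to its legal range.
            child = dict(config)
            name = random.choice(list(PARAM_RANGES))
            lo, hi = PARAM_RANGES[name]
            child[name] = min(hi, max(lo, child[name] + random.randint(-64, 64)))
            return child

        def evolve(seed, generations=200):
            # Simple (1+1) evolutionary loop: keep the child only if it is fitter.
            best, best_score = seed, benchmark(seed)
            for _ in range(generations):
                child = mutate(best)
                score = benchmark(child)
                if score > best_score:
                    best, best_score = child, score
            return best

        print(evolve({"read_ahead_kb": 128, "nr_requests": 64}))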

    Database System Architecture for Fault tolerance and Disaster Recovery

    Application systems in use today rely heavily on the availability of the database system. Disruption of the database system can be damaging and catastrophic to an organization that depends on it for its business and service operations. To ensure business continuity under foreseeable and unforeseeable man-made or natural disasters, the database system has to be designed and built with fault tolerance and disaster recovery capabilities. This project explored existing technologies and solutions to design, build, and implement a database system architecture for fault tolerance and disaster recovery using Oracle database software products. The project goal was to implement a database system architecture for migrating multiple web applications and databases onto a consolidated architecture providing high-availability database application systems.
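    One ingredient of such an architecture is client-side failover: if the primary site is unreachable, connections are redirected to the disaster-recovery standby. The sketch below illustrates only that pattern; the hostnames are invented, and connect() is a placeholder standing in for a real Oracle driver call rather than the project's actual configuration:

        import time

        # Hypothetical site list: primary first, then the disaster-recovery standby.
        SITES = ["db-primary.example.org", "db-standby.example.org"]

        def connect(host):
            # Placeholder for a real database driver call; here it always fails,
            # to simulate an unreachable site.
            raise OSError(f"cannot reach {host}")

        def connect_with_failover(sites=SITES, retries=3, delay=2.0):
            # Try each site in order; pause briefly between full passes.
            for _ in range(retries):
                for host in sites:
                    try:
                        return connect(host)
                    except OSError:
                        continue
                time.sleep(delay)
            raise RuntimeError("all database sites unreachable")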

    Evaluation of Job Queuing/Scheduling Software: Phase I Report

    The recent proliferation of high-performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, the Numerical Aerodynamic Simulation (NAS) supercomputer facility compiled a requirements checklist for job queuing/scheduling software. Next, NAS began an evaluation of the leading job management system (JMS) software packages against the checklist. This report describes the three-phase evaluation process and presents the results of Phase I: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still insufficient, even in the leading JMSs. However, by ranking each JMS evaluated against the requirements, we provide data that will be useful to other sites in selecting a JMS.
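    A checklist-based ranking of this kind reduces to a weighted score per package. The requirements, weights, and package names below are invented for illustration and are not NAS's actual checklist:

        # Hypothetical requirement -> weight mapping.
        requirements = {
            "parallel_job_support": 3,
            "workstation_clusters": 2,
            "checkpointing":        1,
        }
        # Hypothetical JMS -> requirement -> met? (1 = yes, 0 = no).
        capabilities = {
            "JMS-A": {"parallel_job_support": 1, "workstation_clusters": 1, "checkpointing": 0},
            "JMS-B": {"parallel_job_support": 0, "workstation_clusters": 1, "checkpointing": 1},
        }

        def score(jms):
            # Weighted sum over the requirements each package satisfies.
            return sum(w * capabilities[jms].get(req, 0) for req, w in requirements.items())

        for jms in sorted(capabilities, key=score, reverse=True):
            print(f"{jms}: {score(jms)}")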

    A formal process for the testing of servers

    Abstract not provided.

    Computer software configuration management plan for the 241-AY and 241-AZ tank farm MICON automation system


    A new method of training computing lab assistants

    This project developed a graduate-level independent study course to be used in the training of graduate assistants working in the computer lab at the School of Library and Information Science at the University of North Carolina at Chapel Hill. The first section of this paper describes the reasons for proposing a change in training method. The course syllabus and other course-related content make up the second section of the paper. The goal of the course is to train lab assistants more efficiently while they earn graduate-level credit. The course content can be adapted to changes in technology, staffing, and other requirements and needs.

    Multi-Media Mail in Heterogeneous Networks

    The MIME approach seems to be the most reasonable effort for allowing the sending and receiving of multimedia messages using standard Internet mail transport facilities. By providing new header fields, such as MIME-Version, Content-Type, and Content-Transfer-Encoding, it is now possible to include various kinds of information, e.g. audio, images, richtext, or video, in an RFC 822-conformant mail. Using these headers, an attached body part can be fully described, so that a receiving mail user agent is able to display it without any loss of information. Additionally, the definition of the "multipart" and "message" content types allows the creation of hierarchically structured mails, e.g. a message containing two alternative parts of information: one that can be shown on a simple ASCII terminal, the other to be displayed on a multimedia workstation. The ability to define content types bilaterally, together with a standardized means of establishing new content types, prevents MIME from being a one-way road and supplies mechanisms to extend MIME for future use.
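    The multipart/alternative structure described above can be produced with Python's standard email library. In this runnable sketch the addresses are placeholders, and an HTML part stands in for the richer "multimedia workstation" rendering:

        from email.message import EmailMessage

        msg = EmailMessage()
        msg["Subject"] = "MIME demonstration"
        msg["From"] = "alice@example.org"   # placeholder addresses
        msg["To"] = "bob@example.org"

        # First body: displayable on a simple ASCII terminal.
        msg.set_content("Plain-text version for ASCII terminals.")

        # Alternative body: a richer rendering. The library upgrades the message
        # to multipart/alternative and sets MIME-Version, Content-Type, and
        # Content-Transfer-Encoding headers automatically.
        msg.add_alternative(
            "<html><body><b>Rich</b> version for multimedia workstations.</body></html>",
            subtype="html",
        )

        print(msg)   # shows the multipart/alternative structure and MIME headers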

    Towards understanding and mitigating attacks leveraging zero-day exploits

    Zero-day vulnerabilities are unknown and therefore unaddressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits, and attacks that make use of them. In recent years there have been a number of leaks publishing such attacks using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities themselves or the methods and processes used to exploit them. By moving beyond merely remedying the vulnerabilities to defences that can prevent or detect the actions taken by attackers, an information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example by compromising syslog servers or by descending to lower system rings to gain access. However, defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, and to misdirect attribution by planting false artefacts for forensic analysis and by attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques. An example of this is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as remote syslog (e.g. from a firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic). These defences all have the potential to result in the attacker being discovered; attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down, and learn from attackers. By employing various tactics, defenders are able to increase their chance of detecting attacks and the time available to react to them, even for attacks exploiting hitherto unknown vulnerabilities. To summarize the information presented in this thesis and to show its practical importance, an examination is presented of the NSA's network intrusion into the SWIFT organisation, showing that the firewalls were exploited with remote-code-execution zero-days. This attack has a striking parallel in the approach used by the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. However, by studying state actors, we can gain insight into what other actors with fewer resources may do in the future.
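    The "defences external to the system" point is straightforward to realise with remote logging: audit events are shipped off-host as they occur, so an intruder who later gains root cannot erase the local evidence. A minimal sketch using Python's standard library follows; the collector hostname and the logged event are placeholders:

        import logging
        import logging.handlers

        logger = logging.getLogger("audit")
        logger.setLevel(logging.INFO)

        # Standard-library handler that forwards records to a remote syslog
        # collector over UDP port 514, off the monitored host.
        remote = logging.handlers.SysLogHandler(address=("loghost.example.org", 514))
        logger.addHandler(remote)

        # Example audit event: once sent, it lives on the collector, out of
        # reach of an attacker who compromises this machine afterwards.
        logger.info("sshd: accepted publickey for admin from 10.10.0.23")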