
    Analysis of SSD’s Performance in Database Servers

    Data storage is needed in virtually every type of device, and storage mechanisms vary from device to device, but ultimately the data is held on a drive in digital form. One predominant storage device is the hard disk drive (HDD). Hard disk drives are used in a wide range of systems such as computers, laptops and netbooks; they use magnetic platters for read and write operations. (Hard disk drive, n.d.) Emerging technologies and the modularization of web application architecture have created a need for different kinds of operating systems and system architectures depending on functionality. A server where files are to be placed should be designed to be good at input and output (I/O) operations. (How does a hard drive work?, 2018) A server that stores and streams videos should be good at asynchronous streaming. To store structured or unstructured data belonging to an educational institution or an organization, we can use a database server that keeps the data in tables for later use. In general, hard disk drives are used to store data in all of these servers; only the system architecture changes. HDD utilization has remained essentially constant for the past 20 years: there has been enormous growth in the architectural design of operating systems used for hosting database servers, but for storage, HDDs have been used throughout. With the need for speed and faster storage operations, solid state drives (SSDs) come into the picture. (SSD Advantage, n.d.) They have a different architecture from HDDs. This paper discusses the idea of using SSDs instead of HDDs in database servers.
We created multiple database instances on SSDs and HDDs, built multiple web applications in Java, and connected each of them to these database servers to access data via REST APIs. We ran multiple tests to compare the load time of the different database instances and generated visual analytics of how each behaves when multiple or serial GET operations are made on the database through the REST API. This analysis helps identify any anomalies in behavior as the throughput of read and write operations increases.
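The load tests described above amount to timing a series of GET requests against each database instance. A minimal sketch of such a timing loop (the paper's Java code is not shown; the callable and the summary statistics here are our own illustrative assumptions):

```python
import time
import statistics

def benchmark(fetch, n_requests=100):
    """Time n_requests consecutive calls to fetch() and summarize latency.

    fetch is any zero-argument callable performing one GET against the
    REST API in front of a given database instance (SSD- or HDD-backed),
    e.g. lambda: requests.get("http://host/api/records").
    """
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        fetch()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * (len(latencies) - 1))],
        "total_s": sum(latencies),
    }
```

Running the same loop against the SSD-backed and the HDD-backed endpoints, with increasing request counts, yields the kind of throughput curves the analysis compares.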

    Enterprise Network Design and Implementation for Airports

    The aim of this project was the design and implementation of an airport network, introducing a network suitable for most airports around the world. The project focused on three main parts: security, quality, and safety. Several utilities were used to give the airport network a high level of security: hardware firewalls, an IP access control list, MAC address port security, a domain server and a proxy server. All of these utilities have been configured to provide a secure environment for the entire network and to prevent hackers from entering sensitive departments such as the flight management and service provider departments. Improving the performance of any network requires high-quality techniques and services that support the general tasks of the network. The technical services placed in the airport's network are a failover firewall utility, a Pre-boot Execution Environment (PXE) server, a Dynamic Host Configuration Protocol (DHCP) server, a Domain Name System (DNS) server and a cabling system. These tools can increase the performance of the network in general and provide a stable internet service for the Air Traffic Control System by using dual internet service providers and the failover utility. The dual internet service providers serve the flight management department, which helps to back up the Air Traffic Control Complex (BATCX) system to a location outside the local network. This is achieved by using Windows Server backup (iSCSI initiator and iSCSI target) servers, which help to keep the Air Traffic Control systems' information in a safe place. Also, for the safety of passengers' personal information, the web server has been placed in the local network, which provides a secure environment for every element of the network.

    The H.E.S.S. central data acquisition system

    The High Energy Stereoscopic System (H.E.S.S.) is a system of Imaging Atmospheric Cherenkov Telescopes (IACTs) located in the Khomas Highland in Namibia. It measures cosmic gamma rays of very high energies (VHE; >100 GeV) using the Earth's atmosphere as a calorimeter. The H.E.S.S. array entered Phase II in September 2012 with the inauguration of a fifth telescope that is larger and more complex than the other four. This paper gives an overview of the current H.E.S.S. central data acquisition (DAQ) system, with particular emphasis on the upgrades made to integrate the fifth telescope into the array. First, the various requirements for the central DAQ are discussed; then the general design principles employed to fulfil these requirements are described. Finally, the performance, stability and reliability of the H.E.S.S. central DAQ are presented. One of the major accomplishments is that less than 0.8% of observation time has been lost due to central DAQ problems since 2009. (Comment: 17 pages, 8 figures, published in Astroparticle Physics.)

    Cost-effective HPC clustering for computer vision applications

    We present a cost-effective and flexible realization of high performance computing (HPC) clustering and its potential for solving computationally intensive problems in computer vision. The software foundation supporting the parallel programming is the GNU parallel Knoppix package with message passing interface (MPI) based Octave, Python and C interface capabilities. The implementation is of particular interest in applications where the main objective is to reuse the existing hardware infrastructure and to stay within the overall budget. We present benchmark results and compare and contrast the performance of Octave and MATLAB.
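The scatter/gather pattern at the heart of such MPI jobs can be sketched in plain Python (a thread pool is used here purely for brevity; on the cluster the workers would be MPI processes on separate nodes, and the kernel below is a stand-in of our own, not code from the paper):

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(chunk):
    """Toy per-worker task standing in for a vision kernel such as filtering."""
    return sum(x * x for x in chunk)

def scatter_gather(data, workers=4):
    """Split the input across workers, run the kernel on each chunk in
    parallel, then reduce the partial results -- the same structure an
    MPI program expresses with scatter and reduce calls."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(kernel, chunks))
    return sum(partials)
```

The benchmark comparison then reduces to timing this kind of job under Octave and MATLAB front-ends over the same MPI back-end.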

    A Business Continuity Solution for Telecommunications Billing Systems

    The billing system is a critical component in a telecommunications service provider's suite of business support systems: without the billing system the provider cannot invoice their customers for services provided and therefore cannot generate revenue. Typically, billing systems are hosted on a single large Unix/Oracle system located in the company's data centre. Modern Unix servers, with their redundant components and hot-swap parts, are highly resilient and can provide high levels of availability when correctly installed in a properly managed data centre with uninterruptible power supplies, cooling etc. High-availability clustering through the use of HP MC/ServiceGuard, Sun Cluster, IBM HACMP (High Availability Cluster Multi-Processing) or Oracle Clusterware/RAC (Real Application Clusters) can raise this level of availability even higher. This approach, however, can only protect against the failure of a single server or component of the system; it cannot protect against the loss of an entire data centre in the event of a disaster such as a fire, flood or earthquake. To protect against such disasters it is necessary to provide some form of backup system on a site sufficiently remote from the primary site that it would not be affected by any disaster that might befall the primary site. This paper proposes a cost-effective business continuity solution to protect a telecommunications billing system from the effects of unplanned downtime due to server or site outages. It is aimed at smaller-scale tier 2 and tier 3 providers such as Mobile Virtual Network Operators (MVNOs) and startup Competitive Local Exchange Carriers (CLECs), who are unlikely to have large established IT systems with business continuity features and for whom cost effectiveness is a key concern when implementing IT systems.
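A core decision in any two-site continuity scheme is when the remote standby should take over. A minimal sketch of that decision logic (the threshold is an illustrative assumption of ours, not a value from this paper):

```python
class FailoverMonitor:
    """Track heartbeats from the primary billing site and decide when the
    remote standby site should take over."""

    def __init__(self, missed_limit=3):
        # Require several consecutive missed heartbeats so a transient
        # network blip between the sites does not trigger a full failover.
        self.missed_limit = missed_limit
        self.missed = 0

    def heartbeat_received(self):
        self.missed = 0

    def heartbeat_missed(self):
        self.missed += 1

    def should_fail_over(self):
        return self.missed >= self.missed_limit
```

Requiring consecutive misses trades a slightly longer outage window for protection against spurious failovers, which matters when failing back a billing database is itself an expensive operation.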

    Ensuring system integrity and security on limited environment systems

    Cyber security threats have developed rapidly in recent years and should also be considered when building or implementing systems that traditionally have not been connected to networks. More and more, these systems are being networked and controlled remotely, which widens their attack surface and lays them open to cyber threats. This means the systems should be able to detect and block malware threats without letting the controls affect daily operations. File integrity monitoring and protection could be one way to protect systems from emerging threats. The use case for this study is a computer system that controls a medical device. Such a system does not necessarily have an internet connection and is not connected to a LAN by default. Ensuring integrity on the system is critical, as a malware infection could affect the test results. This thesis studies which ways of ensuring system integrity are feasible on limited environment systems. First, methods and tools are listed through a literature review, and each tool is examined for how it protects system integrity. The literature review aims to select methods for further testing through deductive reasoning. After the methods are selected, their implementations are installed in the test environment. The methods are first tested for performance, and then their detection and blocking capability is tested against real-life threats. Finally, this thesis proposes a method that could be implemented in the presented use case. The proposal at the end is based on the conducted tests.
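The basic mechanism behind file integrity monitoring is simple: record a trusted hash of every monitored file and periodically compare. A minimal sketch (the specific tools the thesis evaluates are not named here; this illustrates only the general technique):

```python
import hashlib
import os

def hash_file(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def build_baseline(paths):
    """Record a trusted hash for every monitored file."""
    return {p: hash_file(p) for p in paths}

def verify(baseline):
    """Compare current hashes against the baseline; return files that were
    modified or deleted since the baseline was taken."""
    return [p for p, digest in baseline.items()
            if not os.path.exists(p) or hash_file(p) != digest]
```

Real products add tamper-proof baseline storage and blocking (not just detection), which is exactly the capability the thesis tests against real-life threats.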

    Orchestration of a large infrastructure of Remote Desktop Windows Servers

    The CERN Windows Terminal Service infrastructure is an aggregation of multiple virtual servers running Remote Desktop Services, accessed by hundreds of users every day. It has two purposes: to provide external access to the CERN network, and to exercise access control to certain parts of the accelerator complex. Currently, the deployment and configuration of these servers and services require some interaction by system administrators, although scripts and tools developed at CERN do help to alleviate the problem. Scaling the infrastructure up and down (i.e., adding or removing servers) is also an issue, since it is done manually. However, recent changes in the infrastructure and the adoption of new software tools that automate software deployment and configuration open new possibilities to improve and orchestrate the current service. Automation and orchestration will not only reduce the time and effort necessary to deploy new instances, but will also simplify operations like patching, analysis and rebuilding of compromised nodes, and will provide better performance in response to load increases. The goal of this CERN project, which we are now a part of, is to automate the provisioning (and decommissioning) and the scaling (up and down) of the infrastructure. Given the scope and magnitude of the problems that must be solved, no single solution is capable of addressing them all; therefore, multiple technologies are required. For the deployment and configuration of Windows Server systems we resort to Puppet, while for orchestration tasks, Microsoft Service Management Automation will be used.
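The scaling half of such orchestration ultimately reduces to a capacity rule: given the current session load, how many terminal servers should the pool run? A minimal sketch of one such rule (the thresholds are illustrative assumptions of ours, not CERN policy):

```python
def plan_pool_size(active_sessions, max_per_server=30, target_util=0.7):
    """Return how many Remote Desktop servers the pool should run.

    Size the pool so average utilisation stays near target_util of each
    server's session cap, and never scale below one server.
    """
    per_server = max(1, int(max_per_server * target_util))
    # Ceiling division without math.ceil: -(-a // b)
    return max(1, -(-active_sessions // per_server))
```

An orchestrator such as Service Management Automation would evaluate a rule like this periodically and invoke the Puppet-driven provisioning (or decommissioning) runbooks to converge the pool to the planned size.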

    Developing a Virtual Appliance to Simulate Broken Networks

    The objective of this thesis is to develop a virtual appliance that simulates broken networks. A virtual appliance is a ready-to-use server that can be run on a virtualization platform. The thesis was commissioned by HowNetWorks Oy, a startup based in Oulu and Helsinki, Finland, that develops tools to measure networks in ways that most tests do not. The objective of the appliance is therefore to simulate the network properties that HowNetWorks is testing, to help their development. The appliance, called hnwProxy, is created using infrastructure-as-code methodologies under an open-source license and can be downloaded from GitHub at github.com/hownetworks/hnwproxy. Infrastructure as code is a new paradigm in infrastructure management that uses practices from software engineering along with automation tools to create higher-quality, more reliable and higher-performing systems. The theoretical background of this thesis consists of network quality and infrastructure as code. The network quality section describes which factors affect the quality of a connection, i.e. what can be simulated with hnwProxy. For infrastructure as code, the book Infrastructure as Code by Kief Morris is used almost exclusively as a reference. The network quality literature is more dispersed: there is no single book on the topic, and the most-used reference is Kurose & Ross's Computer Networking: A Top-Down Approach, which covers computer networking quite exhaustively, though further references are still needed for more niche or loosely related topics. The result of this thesis is the virtual appliance hnwProxy, which can simulate a broken network connection in several different ways and can run on a few different virtualization platforms, so it has met all requirements.
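The essence of simulating a broken connection is degrading traffic in controlled ways, such as packet loss and added latency. A minimal sketch of the idea (the parameter names here are illustrative assumptions; hnwProxy's actual knobs and implementation may differ):

```python
import random

def degrade(packets, loss_rate=0.1, extra_delay_ms=50, seed=None):
    """Pass packets through a simulated bad link: drop each one with
    probability loss_rate and tag the survivors with added latency."""
    rng = random.Random(seed)  # seedable for reproducible test runs
    delivered = []
    for pkt in packets:
        if rng.random() < loss_rate:
            continue  # packet lost on the simulated link
        delivered.append((pkt, extra_delay_ms))
    return delivered
```

A real appliance applies this kind of policy to live traffic in transit (for example via the kernel's network emulation facilities) rather than to an in-memory list, but the degradation model is the same.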
