
    High performance communication subsystem for clustering standard high-volume servers using Gigabit Ethernet

    This paper presents an efficient communication subsystem, DP-II, for clustering standard high-volume (SHV) servers using Gigabit Ethernet. DP-II employs several lightweight messaging mechanisms to achieve low-latency, high-bandwidth communication. Tests show an 18.32 µs single-trip latency and 72.8 MB/s bandwidth on a Gigabit Ethernet network connecting two Dell PowerEdge 6300 Quad Xeon SMP servers running Linux. To improve programmability, DP-II was developed on top of a concise yet powerful abstract communication model, the Directed Point Model, which can conveniently depict the inter-process communication pattern of a parallel task in a cluster environment. In addition, the DP-II API preserves the syntax and semantics of traditional UNIX I/O operations, which makes it easy to use.
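    The abstract notes that the DP-II API preserves the syntax and semantics of traditional UNIX I/O. As a rough illustration of that style only (not the actual DP-II API), a hypothetical endpoint wrapper over plain UDP sockets might look like this:

```python
# Hypothetical sketch, not the DP-II implementation: a messaging endpoint whose
# calls mirror traditional UNIX I/O (open/read/write/close), layered on ordinary
# UDP sockets purely for illustration.
import socket

class DirectedPoint:
    """A named communication endpoint, loosely modelled on the Directed Point idea."""

    def __init__(self, local_port: int):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("", local_port))           # "open" the endpoint

    def write(self, peer: tuple, payload: bytes) -> int:
        return self.sock.sendto(payload, peer)     # analogous to write(fd, buf, len)

    def read(self, bufsize: int = 4096) -> bytes:
        data, _addr = self.sock.recvfrom(bufsize)  # analogous to read(fd, buf, len)
        return data

    def close(self) -> None:
        self.sock.close()

# Usage: two processes exchange a message much as they would with file descriptors.
# dp = DirectedPoint(5000)
# dp.write(("node2", 5001), b"hello")
# reply = dp.read()
# dp.close()
```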

    Profiling and Identification of Web Applications in Computer Network

    Characterising network traffic is a critical step for detecting network intrusion or misuse. The traditional way to identify the application associated with a set of traffic flows uses port numbers and DPI (Deep Packet Inspection), but it is hampered by the use of dynamic ports and encryption. The research community has proposed models for traffic classification and determined the most important requirements and recommendations for a successful approach. The suggested alternatives can be categorised into four techniques: port-based, packet-payload-based, host-behavioural, and statistical-based. Port- and payload-based identification typically relies on IANA-assigned port numbers and deep packet inspection, but an increasing number of Internet applications now use dynamic port assignments and encrypt their traffic, rendering these techniques ineffective for real-time traffic identification. In recent years, two other techniques have been introduced to avoid these limitations, focusing on host behaviour and statistical methods. The former is based on the idea that hosts generate different communication patterns at the transport layer; by extracting these behavioural patterns, activities and applications can be classified. However, it cannot correctly identify application names, classifying both Yahoo and Gmail simply as email. Studies have therefore focused on statistical-feature approaches that identify traffic associated with applications using machine learning algorithms. This method relies on characteristics of IP flows, minimising the overhead limitations associated with other schemes. The classification accuracy of statistical flow-based approaches, however, depends on the discriminative ability of the traffic features used. NetFlow is the de facto standard for monitoring and analysing network traffic, but the information it provides is not enough to describe application behaviour. The primary challenge is to describe the activity fully, within and among network flows, in order to understand application usage and user behaviour.
    This thesis proposes novel features that precisely describe web application behaviour in order to segregate various user activities. Extracting the most discriminative features that characterise web applications is key to achieving higher accuracy without being biased by either users or network circumstances. This work investigates features that characterise application behaviour based on the arrival timing of packets and flows. As part of describing application behaviour, the research considered on/off data transfer, a defining characteristic of many typical applications, and the amount of data transferred or exchanged. Furthermore, the research considered the timing and patterns of user events within a network application session. Using an extended set of traffic features extracted from traffic captures, a supervised machine learning classifier was developed. To this end, the present work customised the popular tcptrace utility to generate classification features based on traffic burstiness and periods of inactivity for everyday Internet usage. A C5.0 decision tree classifier was applied to the proposed features for eleven different Internet applications, with traffic generated by ten users. Overall, the newly proposed features achieved a high level of accuracy (~98%) in classifying the respective applications. Afterwards, uncontrolled data collected in a real environment from a group of 20 users accessing different applications was used to evaluate the proposed features. The evaluation tests indicated that the method identifies the correct network application with an accuracy of 87%.
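    A minimal sketch of the flow-classification step is shown below; it is not the thesis implementation. The feature columns (burst and idle statistics per flow) are hypothetical stand-ins for the tcptrace-derived features, the labels are synthetic, and scikit-learn's CART decision tree stands in for C5.0.

```python
# Illustrative flow classification with synthetic data and assumed feature names.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Each row: [burst_count, mean_burst_bytes, mean_idle_seconds, flow_duration]
X = rng.random((600, 4))
y = rng.integers(0, 11, size=600)   # eleven application labels, as in the thesis

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=8, random_state=0)  # CART as a C5.0 stand-in
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```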

    Development of an On-line Radiation Detection and Measurements Laboratory Course

    An on-line radiation detection and measurements lab is being developed with a grant from the U.S. Nuclear Regulatory Commission. The on-line laboratory experiments are designed to provide a realistic laboratory experience and will be offered to students at colleges and universities where such a course is not available. This thesis presents four web-based experiments: 1) nuclear electronics, 2) gamma-ray spectroscopy with scintillation detectors, 3) external gamma-ray dosimetry and gamma attenuation in matter, and 4) alpha spectroscopy and absorption in matter. Students access the experiments through a broadband internet connection. Computer-controlled instrumentation developed in National Instruments (NI) LabVIEW™ communicates with the URSA-II (SE International, Inc.) data acquisition system, which controls the detector bias voltage, pulse shaping, amplifier gain, and ADC. Detector and amplifier output pulses can be displayed with other instrumentation developed in LabVIEW™ for the digital oscilloscope (USB-5132, NI). Additional instrumentation developed in LabVIEW™ is used to control the positions of all sources with stepper motor controllers (VXM-1, Velmex, Inc.) and to adjust pressure in the alpha chamber with a digital vacuum regulator (DVR-200, J-KEM, Inc.). Unique interactive interfaces are created by integrating all of the instrumentation necessary to conduct each lab. These interfaces provide students with seamless functionality for data acquisition control, experimental control, and live data display with real-time updates for each experiment. A webcam streams the experiment live so that the student can observe the physical instruments and receive visual feedback from the system in real time.
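    The sketch below is purely illustrative of how such separate instrument controls might be composed behind a single interface; the class and method names are hypothetical and do not correspond to the actual LabVIEW instrumentation or vendor drivers.

```python
# Hypothetical remote-lab session object; every method name is an assumption made
# for illustration and would, in the real system, delegate to the LabVIEW VIs.
from dataclasses import dataclass

@dataclass
class RemoteLabSession:
    bias_voltage: float = 0.0      # detector bias (V)
    source_position: float = 0.0   # source-detector distance (cm)
    chamber_pressure: float = 760  # alpha-chamber pressure (torr)

    def set_bias_voltage(self, volts: float) -> None:
        self.bias_voltage = volts            # would command the data acquisition unit

    def move_source(self, position_cm: float) -> None:
        self.source_position = position_cm   # would drive a stepper motor controller

    def set_pressure(self, torr: float) -> None:
        self.chamber_pressure = torr         # would address a digital vacuum regulator

    def acquire_spectrum(self, seconds: int) -> list:
        # would return a pulse-height histogram streamed back to the student
        return [0] * 1024

# A student's experiment script might chain these calls:
lab = RemoteLabSession()
lab.set_bias_voltage(900.0)
lab.move_source(5.0)
spectrum = lab.acquire_spectrum(seconds=300)
```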

    Analyzable dataflow executions with adaptive redundancy

    Increasing performance requirements in the embedded systems domain have driven a shift from single-core to multi-core processors, which are now widely used in embedded systems. Cars are an example of complex embedded systems in which the use of multi-core processors is continuously increasing. A major reason for this is to consolidate different software components on one chip and thus reduce the number of electronic control units (ECUs). However, the de facto standard in the automotive industry, AUTOSAR (AUTomotive Open System ARchitecture), was originally designed for single-core processors. Although basic support for multi-core processors has been added, more complex architectures are currently not compatible with the software stack. The requirements of the software components running on the ECUs of modern cars are diverse. On the one hand, there are safety-critical tasks such as airbag control, the anti-lock braking system, electronic stability control and the emergency brake assist; on the other hand, there are tasks with no safety-related requirements at all, for example those controlling the infotainment system. Trends like autonomous driving lead to even more demanding tasks, since such tasks are both safety-critical and data-intensive. As embedded applications, like those in the automotive domain, become more complex, new approaches are necessary. Data-intensive tasks are usually tackled with large-scale computing frameworks. In this thesis, some major concepts of such frameworks are transferred to the high-performance embedded systems domain. For this purpose, the thesis describes a runtime environment (RTE) suitable for different kinds of multi- and many-core hardware architectures. The RTE follows a dataflow execution model based on directed acyclic graphs (DAGs). Graphs are divided into sections which are scheduled separately. For each section, the RTE uses a DAG scheduling heuristic to compute multiple schedules covering different redundancy configurations. This allows the RTE to change the redundancy of parts of the graph dynamically at runtime despite the use of fixed schedules. Alternatively, the RTE also provides an online scheduler. To specify suitable graphs, the RTE provides a programming model which shares similarities with common large-scale computing frameworks, for example Apache Spark. Using this programming model, three common distributed algorithms, namely Cannon's algorithm, the Cooley-Tukey algorithm and bitonic sort, were implemented. With these three programs, the performance of the RTE was evaluated for a variety of configurations on two different hardware architectures. The results show that the proposed RTE is able to reach the performance of established parallel computation frameworks and that, for suitable graphs with reasonable sectionings, the negative influence on the runtime is either small or non-existent.
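    To illustrate the dataflow idea, the sketch below builds a small DAG whose tasks can be executed with configurable redundancy and whose replica outputs are checked for agreement before being passed downstream. It is a simplified stand-in under assumed names, not the thesis RTE or its scheduling heuristic.

```python
# Toy DAG executor with per-task redundancy; all names are illustrative.
from collections import defaultdict, deque

class Task:
    def __init__(self, name, fn, deps=(), replicas=1):
        self.name, self.fn, self.deps, self.replicas = name, fn, list(deps), replicas

def run_dag(tasks):
    """Execute tasks in topological order, re-running redundant tasks and checking agreement."""
    by_name = {t.name: t for t in tasks}
    indeg = {t.name: len(t.deps) for t in tasks}
    children = defaultdict(list)
    for t in tasks:
        for d in t.deps:
            children[d].append(t.name)

    ready = deque(n for n, d in indeg.items() if d == 0)
    results = {}
    while ready:
        t = by_name[ready.popleft()]
        args = [results[d] for d in t.deps]
        outputs = [t.fn(*args) for _ in range(t.replicas)]   # redundant executions
        if any(o != outputs[0] for o in outputs[1:]):
            raise RuntimeError(f"replica disagreement in task {t.name}")
        results[t.name] = outputs[0]
        for c in children[t.name]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return results

# Example: 'scale' runs twice (dual redundancy); 'combine' consumes the agreed value.
dag = [
    Task("load",    lambda: [1, 2, 3]),
    Task("scale",   lambda xs: [2 * x for x in xs], deps=["load"], replicas=2),
    Task("combine", lambda xs: sum(xs),             deps=["scale"]),
]
print(run_dag(dag)["combine"])   # -> 12
```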

    Accessing Space: A Catalogue of Process, Equipment and Resources for Commercial Users, 1990

    A catalogue is presented that is intended for commercial developers who are considering, or who have in progress, a project involving the microgravity environment of space or remote sensing of the Earth. An orientation to commercial space activities is given, along with a current inventory of equipment, apparatus, carriers, vehicles, resources, and services available from NASA, other government agencies and U.S. industry. The information describes the array of resources that commercial users should consider when planning ground- or space-based developments. Many items listed have flown in space or been tested in labs and aboard aircraft, and can be reused, revitalized, or adapted to suit specific requirements. New commercial ventures are encouraged to exploit existing inventory and expertise to the greatest extent possible.

    A neural network and rule based system application in water demand forecasting

    This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. The thesis describes a short-term water demand forecasting application based upon a combination of a neural network forecast generator and a rule-based system that modifies the resulting forecasts. Conventionally, short-term forecasting of both water consumption and electrical load demand has been based upon mathematical models that aim either to extract the mathematical properties displayed by a time series of historical data, or to represent the causal relationships between the level of demand and the key factors that determine that demand. These conventional approaches have been able to achieve acceptable levels of prediction accuracy for those days where distorting, non-cyclic influences are not present to a significant degree. However, when such distortions are present, the resultant decrease in prediction accuracy has a detrimental effect upon the controlling systems that attempt to optimise the operation of the water or electricity supply network. The abnormal, non-cyclic factors can be divided into those related to changes in the supply network itself, those related to particular dates or times of the year, and those related to the prevailing meteorological conditions. If a prediction system is to provide consistently accurate forecasts, it has to be able to incorporate the effects of each of the factor types outlined above. The prediction system proposed in this thesis achieves this by using a neural network that, through the application of appropriately classified example sets, can track the varying relationship between the level of demand and key meteorological variables. The influence of supply network changes and calendar-related events is accounted for by a rule base of prediction-adjusting rules built up with reference to past occurrences of similar events. The resulting system is capable of eliminating a significant proportion of the large prediction errors that can lead to non-optimal supply network operation.
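    As a rough sketch of the hybrid approach described above (not the thesis implementation), the following combines a small neural-network forecaster with a rule base of prediction-adjusting rules. The input features, the adjustment rules and the use of scikit-learn's MLP are illustrative assumptions.

```python
# Hybrid forecast sketch: neural-network base forecast plus rule-based adjustment.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Columns (assumed): [demand_yesterday, max_temperature, rainfall_mm]
X = rng.random((500, 3))
y = 0.7 * X[:, 0] + 0.4 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.02, 500)

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=1)
net.fit(X, y)

def adjust_for_calendar(forecast: float, day: dict) -> float:
    """Apply simple prediction-adjusting rules of the kind built from past similar events."""
    if day.get("public_holiday"):
        forecast *= 0.92          # assumed rule: holidays lower weekday demand
    if day.get("mains_burst"):
        forecast *= 1.10          # assumed rule: network change increases losses
    return forecast

base = float(net.predict([[0.55, 0.80, 0.05]])[0])
print(adjust_for_calendar(base, {"public_holiday": True}))
```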

    The development of a methodology for a tool for rapid assessment of indoor environment quality in office buildings in the UK

    This thesis describes a methodology for the development of a novel tool for rapid assessment of Indoor Environment Quality (IEQ) in office buildings in the UK. The tool uses design, measured, calculated and surveyed data as input for IEQ calculations. The development of such a tool has become a necessity, especially in the developed world, where legally binding targets for greenhouse gas (GHG) emissions have been agreed and where buildings are required by law to display energy performance certification. The novelty of this tool is that it addresses the need for an indoor environment performance rating that can be presented alongside energy performance certification, since the energy performance of office buildings depends significantly on the criteria used for the indoor environment. The tool, called the IEQAT (Indoor Environment Quality Assessment Tool), is based on an IEQ model developed from a literature review. The IEQ model is based on an IEQ index derived from contributing factors, or sub-indices, that include Thermal Comfort, Indoor Air Quality (IAQ), Acoustic Comfort and Lighting. The model was tested by studying the responses of occupants of three office buildings in the UK. Their subjective responses, collected via a questionnaire, were compared against model simulation results calculated using physical measurements of IEQ variables such as air temperature, illuminance (lux), background noise level (dBA), relative humidity, carbon dioxide concentration (ppm), and air velocity. By fitting a multivariate regression model to the questionnaire data, a weighted ranking of the parameters affecting IEQ was produced, and new provisional weightings for the IEQ model, more relevant to the UK situation, were derived.
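    As a minimal illustration of an IEQ index computed from weighted sub-indices, the snippet below combines the four sub-indices named above into a single score. The sub-index values and weights are placeholders, not the provisional UK weightings derived in the thesis.

```python
# Illustrative weighted IEQ index; all numbers are made-up placeholders.
sub_indices = {               # each sub-index normalised to a 0-1 scale
    "thermal_comfort": 0.72,
    "indoor_air_quality": 0.65,
    "acoustic_comfort": 0.58,
    "lighting": 0.81,
}

weights = {                   # hypothetical regression-derived weights (sum to 1)
    "thermal_comfort": 0.35,
    "indoor_air_quality": 0.30,
    "acoustic_comfort": 0.20,
    "lighting": 0.15,
}

ieq_index = sum(weights[k] * sub_indices[k] for k in sub_indices)
print(f"IEQ index: {ieq_index:.2f}")
```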