
    Two research contributions in 64-bit computing: Testing and Applications

    Following the release of the Windows 64-bit and Redhat Linux 64-bit operating systems (OS) in late April 2005, this is one of the first 64-bit OS research projects completed at a British university. The objectives are to investigate (1) the increase or decrease in performance compared to 32-bit computing; (2) the techniques used to develop 64-bit applications; and (3) how 64-bit computing should be used in IT and research organizations to improve their work. This paper summarizes the research discoveries of this investigation, including two major contributions in (1) testing and (2) application development. The first contribution covers performance, stress, application, multiplatform, JDK and compatibility testing for AMD and Intel models. Comprehensive testing results reveal that 64-bit computing performs better than traditional 32-bit computing in application performance, system performance and stress testing, but worse in compatibility testing. A 64-bit dual-core processor has also been tested, and the results show that it outperforms a 64-bit single-core processor, but only in applications that place very high demands on CPU and memory. The second contribution is a set of .NET 1.1 64-bit implementations. Without additional troubleshooting, .NET 1.1 does not run stably on 64-bit Windows operating systems. After stabilizing the .NET environment, the next step is application development: a dynamic repository with functions such as registration, download, login-logout, product submission, database storage and statistical reports. The technology is based on Visual Studio .NET 2003, the .NET 1.1 Framework with Service Pack 1, SQL Server 2000 with Service Pack 4 and IIS Server 6.0 on the Windows Server 2003 Enterprise x64 platform with Service Pack 1.
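
    The study's test suites are not reproduced in the abstract; purely as an illustration of the kind of micro-benchmark a 32-bit versus 64-bit comparison might start from, the following Python sketch (not part of the original study; the workload size is an arbitrary assumption) reports the word size of the running interpreter and times a CPU- and memory-intensive loop.

        # Illustrative sketch only (not from the paper): report whether the running
        # interpreter is 64-bit and time a memory- and CPU-intensive workload.
        import platform
        import sys
        import time

        def describe_runtime():
            bits = 64 if sys.maxsize > 2**32 else 32
            print(f"Python build: {platform.architecture()[0]}, "
                  f"pointer width suggests {bits}-bit, OS: {platform.platform()}")

        def memory_heavy_workload(n=5_000_000):
            # Allocate and sum a large list to stress memory and CPU.
            data = list(range(n))
            return sum(x * x for x in data)

        if __name__ == "__main__":
            describe_runtime()
            start = time.perf_counter()
            memory_heavy_workload()
            print(f"Workload took {time.perf_counter() - start:.2f} s")

    Running the same script under 32-bit and 64-bit interpreters on the same hardware gives one crude data point of the kind aggregated in the paper's application-performance tests.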

    A Multi-objective Perspective for Operator Scheduling using Fine-grained DVS Architecture

    The stringent power budgets of fine-grained power-managed digital integrated circuits have driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. This emerging scenario motivates us to revisit the classical operator scheduling problem given the availability of DVFS-enabled functional units that can trade cycles for power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm that explores this state space and reports the Pareto-optimal front with respect to area and power. The scheduler also aims at maximum resource sharing and attains substantial area and power gains on complex benchmarks when timing constraints are sufficiently relaxed. Experimental results show that the algorithm, operating without any user constraint (area/power), solves the problem for most available benchmarks, and that imposing a power or area budget leads to significant performance gains. Comment: 18 pages, 6 figures, International Journal of VLSI Design & Communication Systems (VLSICS)
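
    The authors' scheduler is not reproduced here; the Python sketch below only illustrates the general branch-and-bound idea of enumerating a Pareto-optimal (area, power) front when each operation may run in one of several DVFS modes under a latency budget. The cost model (MODES), the latency constraint and the pruning rules are illustrative assumptions, not the paper's formulation.

        # Hedged sketch: branch-and-bound enumeration of the Pareto-optimal
        # (area, power) front when each operation picks one of several DVFS modes,
        # each with its own area, power and latency cost, under a latency budget.

        # Hypothetical per-operation modes: (area, power, latency),
        # e.g. a high-Vdd fast unit versus a low-Vdd slow unit.
        MODES = [(4, 9, 1), (3, 5, 2), (2, 3, 3)]

        def dominates(a, b):
            """True if point a = (area, power) dominates point b."""
            return a[0] <= b[0] and a[1] <= b[1] and a != b

        def pareto_front(n_ops, latency_budget, modes=MODES):
            front = []                     # list of ((area, power), assignment)
            min_latency = min(m[2] for m in modes)

            def bound_ok(area, power, latency, depth):
                # Prune if even the fastest modes for the remaining ops miss the budget.
                if latency + (n_ops - depth) * min_latency > latency_budget:
                    return False
                # Prune if the partial cost is already dominated (costs only grow).
                return not any(dominates(p, (area, power)) for p, _ in front)

            def branch(depth, area, power, latency, assignment):
                if not bound_ok(area, power, latency, depth):
                    return
                if depth == n_ops:
                    point = (area, power)
                    # Keep only non-dominated points on the front.
                    front[:] = [(p, a) for p, a in front if not dominates(point, p)]
                    if not any(dominates(p, point) for p, _ in front):
                        front.append((point, tuple(assignment)))
                    return
                for i, (a, p, l) in enumerate(modes):
                    assignment.append(i)
                    branch(depth + 1, area + a, power + p, latency + l, assignment)
                    assignment.pop()

            branch(0, 0, 0, 0, [])
            return front

        if __name__ == "__main__":
            for (area, power), chosen in sorted(pareto_front(5, 10)):
                print(f"area={area:2d}  power={power:2d}  modes={chosen}")

    A completed assignment is kept only if no already-found schedule uses both less area and less power, and a partial assignment is cut as soon as even the fastest remaining modes cannot meet the budget; this is the flavor of bounding the abstract refers to, without the paper's resource-sharing model.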

    Data mining based cyber-attack detection


    Integrated Modeling and Verification of Real-Time Systems through Multiple Paradigms

    Complex systems typically have many different parts and facets, each with different characteristics. In a multi-paradigm approach to modeling, formalisms of different natures are used in combination to describe complementary parts and aspects of the system. This can benefit the modeling activity, as different paradigms can be better suited to describing different aspects of the system. While each paradigm provides a different view on the many facets of the system, it is of paramount importance that a coherent, comprehensive model emerges from the combination of the various partial descriptions. In this paper we present a technique to model different aspects of the same system with different formalisms, while keeping the various models tightly integrated with one another. In addition, our approach leverages the flexibility provided by a bounded satisfiability checker to encode the verification problem of the integrated model as a propositional satisfiability (SAT) problem; this allows users to carry out formal verification activities both on the whole model and on parts thereof. The effectiveness of the approach is illustrated through the example of a monitoring system. Comment: 27 pages
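
    The paper's encoding and toolchain are not detailed in the abstract; the sketch below only illustrates, on a made-up two-bit counter, how a bounded verification question can be unrolled into a propositional satisfiability problem. It assumes the third-party python-sat (pysat) package; the transition system, variable numbering and bound k are illustrative choices, not the authors' technique.

        # Hedged sketch: bounded reachability for a tiny 2-bit counter encoded as SAT.
        # Assumes the python-sat (pysat) package is installed.
        from pysat.solvers import Glucose3

        def var(t, bit):
            """DIMACS variable for counter bit `bit` (0 or 1) at time step t."""
            return 2 * t + bit + 1

        def encode_counter(k):
            """CNF clauses: counter starts at 00 and reads 11 at step k."""
            clauses = [[-var(0, 0)], [-var(0, 1)]]          # initial state 00
            for t in range(k):
                b0, b1 = var(t, 0), var(t, 1)
                n0, n1 = var(t + 1, 0), var(t + 1, 1)
                # bit0' <-> not bit0
                clauses += [[n0, b0], [-n0, -b0]]
                # bit1' <-> bit1 xor bit0
                clauses += [[-n1, b1, b0], [-n1, -b1, -b0],
                            [n1, -b1, b0], [n1, b1, -b0]]
            clauses += [[var(k, 0)], [var(k, 1)]]           # target state 11 at step k
            return clauses

        def reachable_in(k):
            """Return a witness trace of (bit0, bit1) states, or None if UNSAT."""
            with Glucose3(bootstrap_with=encode_counter(k)) as solver:
                if not solver.solve():
                    return None
                model = set(solver.get_model())
                return [(int(var(t, 0) in model), int(var(t, 1) in model))
                        for t in range(k + 1)]

        if __name__ == "__main__":
            for k in (2, 3):
                trace = reachable_in(k)
                print(f"k={k}:", "unreachable" if trace is None else f"witness {trace}")

    A satisfying assignment doubles as a witness trace, which is what makes SAT-based bounded checking convenient for verifying both a whole model and selected parts of it.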

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material. Comment: 72 pages
