ALOJA: A framework for benchmarking and predictive analytics in Hadoop deployments
This article presents the ALOJA project and its analytics tools, which leverage machine learning to interpret Big Data benchmark performance data and to guide tuning. ALOJA is part of a long-term collaboration between BSC and Microsoft to automate the characterization of the cost-effectiveness of Big Data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment in which costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository featuring over 40,000 Hadoop job executions and their performance details. The repository is accompanied by a test-bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters and Cloud services. Despite early success within ALOJA, a comprehensive study requires automation of the modeling procedures to allow analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system for knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviors, predicting execution times for new configurations and hardware choices. This also enables model-based anomaly detection and efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from the ALOJA data sets and framework to improve the design and deployment of Big Data applications. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 639595). This work is partially supported by the Ministry of Economy of Spain under contracts TIN2012-34557 and 2014SGR1051.
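To make the predictive part concrete, the following is a minimal sketch of how execution-time modeling over such a repository could look. It is not the ALOJA-ML implementation: the file name aloja_executions.csv, the column names, and the choice of a gradient-boosting regressor from scikit-learn are assumptions made purely for illustration.

# Hypothetical sketch of execution-time prediction over a benchmark repository.
# The file name, column names and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Assumed export of the repository: one row per Hadoop job execution, with
# configuration knobs as columns and the observed run time as the target.
df = pd.read_csv("aloja_executions.csv")
df = pd.get_dummies(df, columns=["compression", "disk_type"])  # encode categorical knobs

feature_prefixes = ["mappers", "io_buffer_kb", "block_size_mb", "compression", "disk_type"]
X = df[[c for c in df.columns if any(c.startswith(p) for p in feature_prefixes)]]
y = df["exec_time_s"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Forecast execution times for configurations the model has not seen.
print("MAE on held-out executions (s):", mean_absolute_error(y_test, model.predict(X_test)))

A model of this kind can then be queried for untried configurations, which is what enables the anomaly detection and benchmark-prioritization uses described above.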
Smart technologies for effective reconfiguration: the FASTER approach
Current and future computing systems increasingly require that their functionality stay flexible after the system is operational, in order to cope with changing user requirements and improvements in system features, e.g. changing protocols and data-coding standards, evolving demands for support of different user applications, and newly emerging applications in communication, computing and consumer electronics. Extending the functionality and the lifetime of products therefore requires the addition of new functionality to track and satisfy customers' needs and market and technology trends. Many contemporary products incorporate hardware accelerators alongside the software for reasons of performance and power efficiency. While adaptivity of software is straightforward, adapting the hardware to changing requirements is a challenging problem requiring delicate solutions. The FASTER (Facilitating Analysis and Synthesis Technologies for Effective Reconfiguration) project aims at introducing a complete methodology that allows designers to easily implement a system specification on a platform combining a general-purpose processor with multiple accelerators running on an FPGA, taking a high-level description as input and fully exploiting, both at design time and at run time, the capabilities of partial dynamic reconfiguration. The goal is that, for selected application domains, the FASTER toolchain will reduce the design and verification time of complex reconfigurable systems while providing novel verification features that are not available in existing tool flows.
Motivation, Design, and Ubiquity: A Discussion of Research Ethics and Computer Science
Modern society is permeated with computers, and the software that controls them can have latent, long-term, and immediate effects that reach far beyond the actual users of these systems. This places researchers in Computer Science and Software Engineering in a critical position of influence and responsibility, more so than in any other field, because computer systems are vital research tools for other disciplines. This essay presents several key ethical concerns and responsibilities relating to research in computing. The goal is to promote awareness and discussion of ethical issues among computer science researchers. A hypothetical case study is provided, along with questions for reflection and discussion.
Comment: Written as the central essay for the Computer Science module of the LANGURE model curriculum in Research Ethics
Building fault detection and diagnostics: Achieved savings, and methods to evaluate algorithm performance
Fault detection and diagnosis (FDD) represents one of the most active areas of research and commercial product development in the buildings industry. This paper addresses two questions concerning FDD implementation and advancement: 1) What are today's users of FDD saving and spending on the technology? 2) What methods and datasets can be used to evaluate and benchmark FDD algorithm performance? Relevant to the first question, 26 organizations that use FDD across a total of 550 buildings and 97 million square feet achieved median savings of 8%. Twenty-seven FDD users reported that the median base cost for FDD software, annual recurring software cost, and annual labor cost were 2.7 and $8 per monitoring point, with a median implementation size of approximately 1300 points. To address the second question, this paper describes a systematic methodology for evaluating the performance of FDD algorithms, curates an initial test dataset of air handling unit (AHU) system faults, and completes a trial to demonstrate the evaluation process on three sample FDD algorithms. The work provides a first step toward a standard evaluation of different FDD technologies. It shows that the test methodology is scalable and repeatable, provides an understanding of the types of insights that can be gained from algorithm performance testing, and highlights the priorities for further expanding the test dataset.
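As a rough illustration of the kind of scoring such an evaluation involves, the sketch below compares an algorithm's fault flags against ground-truth labels on a set of test samples and reports basic detection metrics. The function name, inputs, and choice of metrics are assumptions for illustration, not the paper's test methodology.

# Hypothetical sketch: scoring an FDD algorithm's output against labelled
# test samples. Inputs and metric choices are illustrative only.
def score_fdd(fault_present, fault_reported):
    """Compute basic detection metrics from parallel lists of booleans.

    fault_present  -- ground-truth label for each test sample
    fault_reported -- whether the algorithm flagged a fault for that sample
    """
    tp = sum(p and r for p, r in zip(fault_present, fault_reported))
    fp = sum((not p) and r for p, r in zip(fault_present, fault_reported))
    fn = sum(p and (not r) for p, r in zip(fault_present, fault_reported))
    tn = sum((not p) and (not r) for p, r in zip(fault_present, fault_reported))
    return {
        "true_positive_rate": tp / (tp + fn) if (tp + fn) else float("nan"),
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
        "precision": tp / (tp + fp) if (tp + fp) else float("nan"),
    }

# Example: five labelled test samples and one algorithm's flags.
truth = [True, True, False, False, True]
flags = [True, False, False, True, True]
print(score_fdd(truth, flags))

Metrics of this sort are what allow different FDD algorithms to be benchmarked repeatably against the same curated fault dataset.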
USBcat - Towards an Intrusion Surveillance Toolset
This paper identifies an intrusion surveillance framework which provides an
analyst with the ability to investigate and monitor cyber-attacks in a covert
manner. Where cyber-attacks are perpetrated for the purposes of espionage the
ability to understand an adversary's techniques and objectives are an important
element in network and computer security. With the appropriate toolset,
security investigators would be permitted to perform both live and stealthy
counter-intelligence operations by observing the behaviour and communications
of the intruder. Subsequently a more complete picture of the attacker's
identity, objectives, capabilities, and infiltration could be formulated than
is possible with present technologies. This research focused on developing an
extensible framework to permit the covert investigation of malware.
Additionally, a Universal Serial Bus (USB) Mass Storage Device (MSD) based
covert channel was designed to enable remote command and control of the
framework. The work was validated through the design, implementation and
testing of a toolset.Comment: In Proceedings AIDP 2014, arXiv:1410.322