53 research outputs found

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    A lightweight interface to local Grid scheduling systems

    Many complex research problems require an immense amount of computational power to solve. To address such problems, the concept of the computational Grid was conceived. Although Grid technology is hailed as the next great enabling technology in Computer Science, the last being the inception of the World Wide Web, some concerns have to be addressed if this technology is to succeed. The main difference between the Web and the Grid in terms of adoption is usability. The Web was designed with both functionality and end-users in mind, whereas the Grid has been designed solely with functionality in mind. Although large Grid installations are operational around the globe, their use is restricted to those who have an in-depth knowledge of their complex architecture and functionality. The technology is therefore out of reach for the very scientists who need these resources, because of its sheer complexity. The Grid is likely to succeed as a tool for some large-scale problem solving, as there is no alternative on a similar scale. However, in order to integrate such systems into our daily lives, just as the Web has been, they need to be accessible to "novice" users. Without such accessibility, the use and growth of such systems will remain constrained. This dissertation details one possible way of making the Grid more accessible: providing high-level access to the scheduling systems on which Grids rely. Since "the Grid" is a mechanism for transferring control of user-submitted jobs to third-party scheduling systems, high-level access to the schedulers themselves was deemed a natural place to begin usability-enhancing efforts. In order to design a highly usable and intuitive interface to a Grid scheduling system, a series of interviews with scientists was conducted to gain insight into the way in which supercomputing systems are utilised. Once this data was gathered, a paper-based prototype system was developed. This prototype was then evaluated by a group of test subjects who set out to criticise the interface and suggest where it could be improved. Based on this new data, the final prototype was developed, first on paper and then in software. The implementation makes use of lightweight Web 2.0 technologies. Designing lightweight software allows one to exploit the dynamic properties of Web technologies and thereby create interfaces that are more usable and also visually appealing. Finally, the system was once again evaluated by another group of test subjects. In addition to user evaluations, performance experiments and real-world case studies were carried out on the interface. This research concluded that a dynamic Web 2.0-inspired interface appeals to a large group of users and allows for greater flexibility in the way in which data, in this case technical data, is presented. In terms of usability, the focal point of this research, it was found that it is possible to build an interface to a Grid scheduling system that can be used by users with no technical Grid knowledge. This is a significant outcome, as users were able to submit jobs to a Grid without fully comprehending the complexities involved, yet understanding the task they were required to perform. Finally, it was found that the lightweight approach is superior to the traditional HTML-only approach in terms of bandwidth usage and response time. In this particular implementation of the interface, the benefits of the lightweight approach are realised approximately halfway through a typical Grid job submission cycle.
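
    As a rough illustration of the kind of high-level access to a local scheduler described above, the sketch below wraps command-line job submission and status queries in two Python calls. The `qsub`/`qstat` commands, their output formats, and the example script name are assumptions standing in for whatever scheduler a given Grid site runs; this is not the interface developed in the dissertation.

```python
import subprocess

def submit_job(script_path: str) -> str:
    """Submit a job script to a PBS-style scheduler and return its job ID.

    Assumes a `qsub` command that prints the new job ID on stdout;
    other schedulers (SLURM, SGE, ...) would need their own wrappers.
    """
    result = subprocess.run(
        ["qsub", script_path], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

def job_status(job_id: str) -> str:
    """Return the raw scheduler status output for a job."""
    result = subprocess.run(
        ["qstat", job_id], capture_output=True, text=True, check=True
    )
    return result.stdout

if __name__ == "__main__":
    job_id = submit_job("example_job.sh")   # hypothetical job script
    print("Submitted:", job_id)
    print(job_status(job_id))
```

    A Web front end in the spirit of the dissertation would expose these two calls over Ajax, refreshing only the status fragment rather than a full HTML page, which is where the bandwidth and response-time savings of the lightweight approach would come from.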

    Scripts in a Frame: A Framework for Archiving Deferred Representations

    Web archives provide a view of the Web as seen by Web crawlers. Because of rapid advancements and adoption of client-side technologies like JavaScript and Ajax, coupled with the inability of crawlers to execute these technologies effectively, Web resources become harder to archive as they become more interactive. At Web scale, we cannot capture client-side representations using the current state-of-the-art toolsets because of the migration from Web pages to Web applications. Web applications increasingly rely on JavaScript and other client-side programming languages to load embedded resources and change client-side state. We demonstrate that Web crawlers and other automatic archival tools are unable to archive the resulting JavaScript-dependent representations (what we term deferred representations), resulting in missing or incorrect content in the archives and the general inability to replay the archived resource as it existed at the time of capture. Building on prior studies on Web archiving, client-side monitoring of events and embedded resources, and studies of the Web, we establish an understanding of the trends contributing to the increasing unarchivability of deferred representations. We show that JavaScript leads to lower-quality mementos (archived Web resources) due to the archival difficulties it introduces. We measure the historical impact of JavaScript on mementos, demonstrating that the increased adoption of JavaScript and Ajax correlates with the increase in missing embedded resources. To measure memento and archive quality, we propose and evaluate a metric to assess memento quality closer to Web users’ perception. We propose a two-tiered crawling approach that enables crawlers to capture embedded resources dependent upon JavaScript. Measuring the performance benefits between crawl approaches, we propose a classification method that mitigates the performance impacts of the two-tiered crawling approach, and we measure the frontier size improvements observed with the two-tiered approach. Using the two-tiered crawling approach, we measure the number of client-side states associated with each URI-R and propose a mechanism for storing the mementos of deferred representations. In short, this dissertation details a body of work that explores the following: why JavaScript and deferred representations are difficult to archive (establishing the term deferred representation to describe JavaScript-dependent representations); the extent to which JavaScript impacts archivability along with its impact on current archival tools; a metric for measuring the quality of mementos, which we use to describe the impact of JavaScript on archival quality; the performance trade-offs between traditional archival tools and technologies that better archive JavaScript; and a two-tiered crawling approach for discovering and archiving currently unarchivable descendants (representations generated by client-side user events) of deferred representations to mitigate the impact of JavaScript on our archives. In summary, what we archive is increasingly different from what we as interactive users experience. Using the approaches detailed in this dissertation, archives can create mementos closer to what users experience rather than archiving the crawlers’ experiences on the Web.
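
    The following sketch illustrates the general idea of a two-tiered crawl rather than the dissertation's actual tooling: tier one records the embedded resources visible in the raw HTML, while tier two loads the page in a headless browser so that resources added by JavaScript appear in the DOM; the set difference approximates what a classic crawler would miss. The library choices (requests, BeautifulSoup, Selenium) are illustrative assumptions.

```python
import requests
from bs4 import BeautifulSoup
from selenium import webdriver

def embedded_resources(html: str) -> set:
    """Collect src/href URLs of embedded resources from an HTML string."""
    soup = BeautifulSoup(html, "html.parser")
    urls = set()
    for tag in soup.find_all(["img", "script", "link", "iframe"]):
        url = tag.get("src") or tag.get("href")
        if url:
            urls.add(url)
    return urls

def two_tier_crawl(url: str) -> dict:
    # Tier 1: plain HTTP fetch, as a traditional crawler would see the page.
    static_html = requests.get(url, timeout=30).text
    tier1 = embedded_resources(static_html)

    # Tier 2: execute client-side code in a headless browser, then re-extract.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        tier2 = embedded_resources(driver.page_source)
    finally:
        driver.quit()

    # Resources reachable only after JavaScript ran: the "deferred" portion.
    return {"static": tier1, "deferred_only": tier2 - tier1}
```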

    Threat Modeling Solution for Internet of Things in a Web-based Security Framework

    The Internet of Things (IoT) is a growing paradigm that provides daily life benefits for its users, motivating a fast-paced deployment of IoT devices in sensitive scenarios. However, current IoT devices do not correctly apply or integrate security controls or technology, potentially leading to a wide panoply of problems, most of them with harmful impact on the user. This work therefore proposes the development of a tool that helps developers create properly secure IoT devices by identifying possible weaknesses in the system. The tool is a module of a framework, named Security Advising Modules (SAM) in the scope of this work, and achieves this objective by identifying possible weaknesses in the software and hardware of IoT devices. To define the weaknesses, a set of databases containing information about vulnerabilities and weaknesses found in systems was investigated throughout this project, and a restricted set of weaknesses to be presented was chosen: since some databases contain hundreds of thousands of vulnerabilities, it was neither feasible nor pertinent to present them completely in the developed tool. Additionally, this work identified the questions used to retrieve system information, allowing us to map the chosen weaknesses to the answers given by the developer to those questions. The tool was tested by running automated tests with the Selenium framework, validated by security experts, and evaluated by a set of 18 users. Finally, based on user feedback, it was concluded that the developed tool was useful, simple and straightforward to use, and that 89% of respondents had never interacted with a similar tool, underscoring its innovative character. The work described in this dissertation was carried out at the Instituto de Telecomunicações, Multimedia Signal Processing - Cv Laboratory, at Universidade da Beira Interior, Covilhã, Portugal. This research work was funded by the SECURIoTESIGN Project through FCT/COMPETE/FEDER under Reference Number POCI-01-0145-FEDER-030657 and by a Fundação para Ciência e Tecnologia (FCT) research grant with reference BIL/Nº12/2019-B00702.
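
    As a hedged illustration of the answer-to-weakness mapping such a tool performs, the sketch below keys a few CWE-style weaknesses on yes/no answers to developer questions. The questions, the triggering rules, and the small weakness set are hypothetical and far simpler than SAM's actual question catalogue.

```python
# Hypothetical questionnaire: each question triggers a CWE-style weakness
# when the developer's answer matches the rule below.
QUESTIONS = {
    "Does the device enforce authentication on its management interface?": {
        "trigger_on": "no",
        "weakness": "CWE-306: Missing Authentication for Critical Function",
    },
    "Is all traffic between the device and the cloud encrypted?": {
        "trigger_on": "no",
        "weakness": "CWE-319: Cleartext Transmission of Sensitive Information",
    },
    "Does the firmware ship with factory-default credentials?": {
        "trigger_on": "yes",
        "weakness": "CWE-798: Use of Hard-coded Credentials",
    },
}

def advise(answers: dict) -> list:
    """Return the weaknesses suggested by the developer's answers."""
    findings = []
    for question, rule in QUESTIONS.items():
        if answers.get(question, "").lower() == rule["trigger_on"]:
            findings.append(rule["weakness"])
    return findings

if __name__ == "__main__":
    example = {
        "Does the device enforce authentication on its management interface?": "no",
        "Does the firmware ship with factory-default credentials?": "yes",
    }
    for finding in advise(example):
        print(finding)
```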

    QoE on media delivery in 5G environments

    5G will expand mobile networks with greater bandwidth, lower latency, and the capacity to provide massive, failure-free connectivity. Users of multimedia services expect a smooth playback experience that adapts dynamically to their interests and mobility context. However, the network, by adopting a neutral position, does not help strengthen the parameters that affect the quality of experience. Consequently, solutions designed to deliver multimedia traffic dynamically and efficiently are of particular interest. To improve the quality of experience of multimedia services in 5G environments, the research carried out in this thesis has designed a multi-part system based on four contributions. The first mechanism, SaW, creates an elastic farm of computing resources that execute multimedia analysis tasks. The results confirm the competitiveness of this approach compared to server farms. The second mechanism, LAMB-DASH, selects the quality level in the media player with a design that requires low processing complexity. The tests confirm its ability to improve the stability, consistency, and uniformity of the quality of experience among clients sharing a network cell. The third mechanism, MEC4FAIR, exploits 5G capabilities for analysing delivery metrics of the different flows. The results show how it enables the service to coordinate the different clients in the cell to improve the quality of service. The fourth mechanism, CogNet, provisions network resources and configures a topology able to accommodate an estimated demand while guaranteeing quality-of-service bounds. In this case, the results show greater accuracy when the demand for a service is higher.
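
    To make the client-side adaptation concrete, here is a minimal, low-complexity rate-selection rule of the kind the LAMB-DASH contribution addresses; the safety margin and bitrate ladder are invented for illustration and this is not the published algorithm.

```python
# Hypothetical bitrate ladder, in kilobits per second.
BITRATES_KBPS = [400, 800, 1500, 3000, 6000]

def select_bitrate(measured_throughput_kbps: float, margin: float = 0.8) -> int:
    """Pick the highest representation that fits under a safety margin of
    the measured throughput; fall back to the lowest representation otherwise."""
    budget = measured_throughput_kbps * margin
    candidates = [b for b in BITRATES_KBPS if b <= budget]
    return max(candidates) if candidates else BITRATES_KBPS[0]

# Example: with ~2.5 Mbps measured, the 1500 kbps representation is chosen.
print(select_bitrate(2500))
```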

    Fault Management For Service-Oriented Systems

    Service Oriented Architectures (SOAs) enable the automatic creation of business applications from independently developed and deployed Web services. As Web services are inherently unreliable, how to deliver reliable Web service composition over unreliable Web services is a significant and challenging problem. The process requires monitoring the system's behavior, determining when and why faults occur, and then applying fault prevention/recovery mechanisms to minimize their impact and/or recover from them. However, it is hard to apply a non-distributed management approach to SOA, since a manager needs to communicate with the different components through authentication. In SOA, a business process can terminate successfully if all services finish their work correctly, with alternative plans provided in case of faults. However, the business process itself may encounter different faults, because a fault may occur anywhere at any time due to SOA specifications. In this work, we propose a new fault management technique (FLEX) and identify several improvements over existing techniques. First, existing techniques rely mainly on static information, while FLEX is based on dynamic information. Second, existing frameworks use a limited number of attributes, while we use all possible attributes by identifying each as either required or optional. Third, FLEX reduces the comparison cost (time and space) by filtering out, at each level, the services needed for evaluation. In general, FLEX is divided into two phases: Phase I computes service reliability and utility, while Phase II performs runtime planning and evaluation. In Phase I, we assess the fault likelihood of each service using a combination of techniques (e.g., Hidden Markov Models, reputation, and clustering). In Phase II, we build a recovery plan to execute in case of fault(s) and calculate the overall system reliability based on the fault occurrence likelihoods assessed for all the services that are part of the current composition. FLEX is novel because it relies on the key activities of the autonomic control loop (i.e., collect, analyze, decide, plan, and execute) to support dynamic management based on changes in user requirements and QoS level. Our technique dynamically evaluates the performance of Web services based on their history and user requirements, assesses the likelihood of fault occurrence, and uses the result to create (multiple) recovery plans. Moreover, we define a method to assess the overall system reliability by evaluating the performance of individual recovery plans when invoked together. The experimental results show that our technique improves service selection quality by selecting the services with the highest scores and improves overall system performance in comparison with existing works. In the future, we plan to investigate techniques for monitoring service-oriented systems and assess online negotiation possibilities for combining different services to create high-performance systems.
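
    The sketch below conveys, under simplifying assumptions, the flavour of the two phases: per-service reliabilities estimated from invocation history stand in for Phase I, overall composition reliability is taken as the product of those reliabilities assuming independent services, and the recovery plan with the highest such estimate is chosen in Phase II. The scoring and plan structure are illustrative, not the FLEX implementation.

```python
from math import prod

def service_reliability(successes: int, invocations: int) -> float:
    """Phase I stand-in: estimate a service's reliability from its history."""
    return successes / invocations if invocations else 0.0

def composition_reliability(service_reliabilities: list) -> float:
    """Assuming independent services, composition reliability is the product."""
    return prod(service_reliabilities)

def choose_recovery_plan(plans: dict) -> str:
    """Phase II stand-in: pick the plan whose services yield the highest
    estimated composition reliability."""
    return max(plans, key=lambda name: composition_reliability(plans[name]))

if __name__ == "__main__":
    # Hypothetical alternative plans, each listing the reliabilities of the
    # services it would invoke if the primary composition fails.
    plans = {
        "plan_A": [0.99, 0.95, 0.90],
        "plan_B": [0.97, 0.97, 0.96],
    }
    print(choose_recovery_plan(plans))   # -> plan_B
```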

    Cyber Security

    This open access book constitutes the refereed proceedings of the 18th China Annual Conference on Cyber Security, CNCERT 2022, held in Beijing, China, in August 2022. The 17 papers presented were carefully reviewed and selected from 64 submissions. The papers are organized according to the following topical sections: data security; anomaly detection; cryptocurrency; information security; vulnerabilities; mobile internet; threat intelligence; text recognition.

    Recent Advances in Social Data and Artificial Intelligence 2019

    The importance and usefulness of subjects and topics involving social data and artificial intelligence are becoming widely recognized. This book contains invited review, expository, and original research articles dealing with, and presenting state-of-the-art accounts of, the recent advances in the subjects of social data and artificial intelligence, and their potential links to Cyberspace.

    Web service composition: A survey of techniques and tools

    Web services are a consolidated reality of the modern Web with a tremendous and growing impact on everyday computing tasks. They have turned the Web into the largest, most accepted, and most vivid distributed computing platform ever. Yet, the use and integration of Web services into composite services or applications, which is a delicate and conceptually non-trivial task, has yet to unleash its full potential. A consolidated analysis framework that advances the fundamental understanding of Web service composition building blocks in terms of concepts, models, languages, productivity support techniques, and tools is required. Such a framework is necessary to enable the effective exploration, understanding, assessment, comparison, and selection of service composition models, languages, techniques, platforms, and tools. This article establishes such a framework and reviews the state of the art in service composition from an unprecedented, holistic perspective.