
    A methodology and a platform to measure and assess software windows of vulnerability

    Nowadays, it is impossible not to recognize how software solutions have changed the world and the crucial role they play in our daily life. With their quick spread, especially in Cloud and Internet of Things contexts, the security risks to which they are exposed have risen as well. Unfortunately, even though many techniques have been developed to protect infrastructures from attackers, they are not enough to achieve truly secure systems. Since the price of recovering from an outbreak can be enormous, organizations need a way to assess the security of the products they use. A useful yet often overlooked metric in this situation is the software window of vulnerability, the amount of time a piece of software has been vulnerable to an attack. The main reason this metric is so often neglected is that the information required to compute it comes from heterogeneous sources, and there is no standard framework, or even a model, to simplify the task. Hence, the aim of this thesis is to fill this gap, first by defining a model to evaluate software windows of vulnerability and then by implementing a platform able to compute this metric for software on different systems. Since a fully general approach is not feasible outside the theoretical model, the implementation step necessarily requires a system-specific choice. GNU/Linux systems were selected for two reasons: their recent rise in popularity in the previously mentioned fields, and their software management policy (based on package managers), which makes the data required by the analysis easier to find.
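
    As a rough illustration of the metric itself (not of the thesis' platform), the sketch below computes a window of vulnerability from two dates: the public disclosure of a vulnerability and the installation of the fixed package. The record fields and the example dates are illustrative assumptions, not data from the original work.

```python
# Minimal sketch of the window-of-vulnerability metric, assuming we already
# know when the vulnerability was publicly disclosed and when the patched
# package was installed. Field names are illustrative only.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class VulnerabilityRecord:
    cve_id: str
    disclosed: datetime                   # public disclosure date
    fixed_installed: Optional[datetime]   # when the patched package landed, if ever


def window_of_vulnerability(record: VulnerabilityRecord,
                            now: Optional[datetime] = None) -> timedelta:
    """Time the system was (or still is) exposed to this vulnerability."""
    end = record.fixed_installed or (now or datetime.now())
    return max(end - record.disclosed, timedelta(0))


if __name__ == "__main__":
    rec = VulnerabilityRecord(
        cve_id="CVE-2014-0160",            # Heartbleed, disclosed 2014-04-07
        disclosed=datetime(2014, 4, 7),
        fixed_installed=datetime(2014, 4, 9),
    )
    print(rec.cve_id, "window:", window_of_vulnerability(rec))  # -> 2 days
```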

    Workset Creation for Scholarly Analysis: Recommendations and Prototyping Project Reports

    This document assembles and describes the outcomes of the four prototyping projects undertaken as part of the Workset Creation for Scholarly Analysis (WCSA) research project (2013–2015). Each prototyping project team provided its own final report, and these reports are assembled together in this document. Based on the totality of the results reported, the WCSA project team also provides a set of overarching recommendations for HTRC implementation and adoption of the research conducted by the prototyping project teams. The work described here was made possible through the generous support of The Andrew W. Mellon Foundation (Grant Ref # 21300666).

    Workshop Report: Campus Bridging: Reducing Obstacles on the Path to Big Answers 2015

    For the researcher whose experiments require large-scale cyberinfrastructure, there exist significant challenges to successful completion. These challenges are broad and go far beyond the simple fact that there are not enough large-scale resources available; these solvable issues range from a lack of documentation written for a non-technical audience to a need for greater consistency in system configuration and in software configuration and availability on the large-scale resources at national-tier supercomputing centers, along with a number of other challenges. Campus Bridging is a relatively young discipline that aims to mitigate these issues for the academic end user, for whom the entire process can feel like a path composed entirely of obstacles. The solutions to these problems must by necessity include multiple approaches, with a focus not only on the end user but also on the system administrators responsible for supporting these resources, as well as on the systems themselves. These system resources include not only those at the supercomputing centers but also those that exist at the campus or departmental level, and even the personal computing devices the researcher uses to complete his or her work. This workshop report compiles the results of a half-day workshop held in conjunction with IEEE Cluster 2015 in Chicago, IL.

    GIS Processing for Geocoding Described Collection Locations

    Much useful data is currently not available for use in contemporary geographic information systems because location is provided as descriptive text rather than in a recognized coordinate system format. This is particularly true for datasets with significant temporal depth, such as museum collections. Development is just beginning on applications that automate the conversion of descriptive, text-based locations to geographic coordinate values. These applications are a type of geocoding or locator service and require functionality in two domains: natural language processing and geometric calculation. Natural language processing identifies the spatial semantics of the text describing a location and tags the individual text elements according to their spatially descriptive role; this is referred to as geoparsing. Once identified, these tagged text elements can either be converted directly to numeric values or used as pointers to geometric objects that represent the geographic features identified in the description. These values and geometries can be employed in a series of functions to determine coordinates for the described location; this is referred to as geoprocessing. Selection of appropriate text elements from a location description, together with ancillary data as input, is critical for successful geocoding. The traverse, one of many types of location description, is selected for geocoding development. Specific text elements with spatial meaning are identified and incorporated into an XML format for use as geoprocessing input. Information associated with the location is added to the XML format to maintain database relations and geoprocessing error-checking functionality. ESRI's ArcGIS 8.3 is used as a development environment where geoprocessing functionality is tested for XML elements using ArcObjects and VBA forms.
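
    The following sketch illustrates the geoprocessing half of the task described above: walking a traverse (a start point plus bearing/distance legs) to a final coordinate pair. The XML element names and the flat planar approximation are assumptions made for illustration; they are not the thesis' actual schema or its ArcObjects/VBA implementation.

```python
# Illustrative sketch of traverse geoprocessing: start point + bearing/distance
# legs walked to a final coordinate pair. Flat planar approximation; the XML
# elements below are hypothetical, not the schema used in the original work.

import math
import xml.etree.ElementTree as ET

TRAVERSE_XML = """
<traverse>
  <start x="500000.0" y="4100000.0"/>   <!-- projected coordinates, e.g. UTM metres -->
  <leg bearing="90"  distance="1000"/>  <!-- due east, 1 km -->
  <leg bearing="0"   distance="500"/>   <!-- due north, 0.5 km -->
</traverse>
"""


def walk_traverse(xml_text: str) -> tuple[float, float]:
    root = ET.fromstring(xml_text)
    start = root.find("start")
    x, y = float(start.get("x")), float(start.get("y"))
    for leg in root.findall("leg"):
        bearing = math.radians(float(leg.get("bearing")))  # clockwise from north
        dist = float(leg.get("distance"))
        x += dist * math.sin(bearing)
        y += dist * math.cos(bearing)
    return x, y


print(walk_traverse(TRAVERSE_XML))  # -> (501000.0, 4100500.0)
```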

    Self-Adaptive Performance Monitoring for Component-Based Software Systems

    Effective monitoring of a software system’s runtime behavior is necessary to evaluate compliance with performance objectives. This thesis emerged in the context of the Kieker application performance monitoring framework. The contribution includes a self-adaptive performance monitoring approach allowing dynamic adaptation of the monitoring coverage at runtime. The monitoring data includes performance measures such as throughput and response time statistics, the utilization of system resources, and the inter- and intra-component control flow. Based on this data, performance anomaly scores are computed using time series analysis and clustering methods. The self-adaptive performance monitoring approach reduces business-critical failure diagnosis time, as it saves time-consuming manual debugging activities. The approach and its underlying anomaly scores are extensively evaluated in lab experiments.
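
    As a toy illustration of how an anomaly score might be derived from response-time measurements via time series analysis, the sketch below forecasts each observation with a moving average over a recent window and scores the normalized deviation. It shows the general idea only and is not Kieker's actual algorithm.

```python
# Toy illustration of response-time anomaly scoring: forecast each value with
# a moving average of the recent window and score how far the observation
# deviates from that forecast. Not Kieker's actual method.

from statistics import mean, pstdev
from typing import List


def anomaly_scores(response_times_ms: List[float], window: int = 5) -> List[float]:
    scores = []
    for i, value in enumerate(response_times_ms):
        history = response_times_ms[max(0, i - window):i]
        if len(history) < 2:
            scores.append(0.0)               # not enough history to judge
            continue
        forecast = mean(history)
        spread = pstdev(history) or 1.0      # avoid division by zero
        deviation = abs(value - forecast) / spread
        scores.append(min(deviation / 3.0, 1.0))  # clamp to [0, 1]
    return scores


series = [101, 99, 103, 100, 98, 102, 350, 101]   # one obvious latency spike
for t, s in zip(series, anomaly_scores(series)):
    print(f"{t:6.1f} ms -> score {s:.2f}")
```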

    24th International Conference on Information Modelling and Knowledge Bases

    In the last three decades, information modelling and knowledge bases have become essential subjects not only in academic communities related to information systems and computer science but also in the business area where information technology is applied. The series of European–Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a co-operation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries. A workshop-like character is typical for the conference: discussion, ample time for presentations, and a limited number of participants (50) and papers (30). Suggested topics include, but are not limited to:
    1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models.
    2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling.
    3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundations of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models.
    4. Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction.
    5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems.
    6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems.
    Overall we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the program committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series “Frontiers in Artificial Intelligence” by IOS Press (Amsterdam). The “Information Modelling and Knowledge Bases” books are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in advancing research and application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki

    A theory and model for the evolution of software services

    Software services are subject to constant change and variation. To control service development, a service developer needs to know why a change was made, what its implications are, and whether the change is complete. Typically, service clients do not perceive the upgraded service immediately. As a consequence, service-based applications may fail on the service client side due to changes carried out during a provider service upgrade. In order to manage changes in a meaningful and effective manner, service clients must therefore be considered when service changes are introduced on the service provider's side; otherwise such changes will almost certainly result in severe application disruption. Eliminating the spurious results and inconsistencies that may occur due to uncontrolled changes is therefore a necessary condition for services to evolve gracefully, ensure service stability, and handle variability in their behavior. Towards this goal, this work presents a model and a theoretical framework for the compatible evolution of services based on well-founded theories and techniques from a number of disparate fields.
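
    The sketch below illustrates the kind of compatibility reasoning such a framework performs: comparing two versions of a service interface and flagging provider-side changes that would break existing clients. The data model and the example operations are invented for illustration and are not part of the original theory.

```python
# Small illustration of a compatibility check between two versions of a
# service interface: flag changes that would break existing clients
# (removed operations, newly required parameters). Invented data model.

from typing import Dict, List, Set

ServiceInterface = Dict[str, Set[str]]   # operation name -> required parameter names


def breaking_changes(old: ServiceInterface, new: ServiceInterface) -> List[str]:
    problems = []
    for op, old_params in old.items():
        if op not in new:
            problems.append(f"operation '{op}' was removed")
            continue
        added_required = new[op] - old_params
        if added_required:
            problems.append(
                f"operation '{op}' gained required parameters {sorted(added_required)}")
    return problems


v1 = {"getOrder": {"orderId"}, "cancelOrder": {"orderId"}}
v2 = {"getOrder": {"orderId", "tenantId"}}   # cancelOrder dropped, parameter added

for issue in breaking_changes(v1, v2):
    print("incompatible:", issue)
```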