
    Vulnerability anti-patterns: a timeless way to capture poor software practices (Vulnerabilities)

    There is a distinct communication gap between the software engineering and cybersecurity communities when it comes to addressing recurring security problems, known as vulnerabilities. Many vulnerabilities are caused by software errors introduced by software developers. Insecure software development practices are common due to a variety of factors, including inefficiencies within existing knowledge transfer mechanisms based on vulnerability databases (VDBs), software developers perceiving security as an afterthought, and a lack of consideration of security as part of the software development lifecycle (SDLC). The resulting communication gap also prevents developers and security experts from successfully sharing essential security knowledge. The cybersecurity community makes its expert knowledge available in forms including vulnerability databases such as CAPEC and CWE, and pattern catalogues such as Security Patterns, Attack Patterns, and Software Fault Patterns. However, these sources are not effective at providing software developers with an understanding of how malicious hackers can exploit vulnerabilities in the software systems they create. As developers are familiar with pattern-based approaches, this paper proposes the use of Vulnerability Anti-Patterns (VAP) to transfer usable vulnerability knowledge to developers, bridging the communication gap between security experts and software developers. The primary contribution of this paper is twofold: (1) it proposes a new pattern template, the Vulnerability Anti-Pattern, which uses anti-patterns rather than patterns to capture and communicate knowledge of existing vulnerabilities, and (2) it proposes a catalogue of Vulnerability Anti-Patterns based on the most commonly occurring vulnerabilities, which software developers can use to learn how malicious hackers can exploit errors in software.
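
    The abstract does not reproduce any entry from the proposed catalogue, so the sketch below is only a hypothetical illustration of the kind of commonly occurring vulnerability such an anti-pattern would capture: SQL injection through string-built queries. It contrasts the insecure practice with the parameterised-query remedy; the table and identifiers are invented and do not come from the paper.

```python
import sqlite3

# Hypothetical schema, used only for this illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.org')")

def find_user_unsafe(username: str):
    # Anti-pattern: concatenating untrusted input into the SQL text lets an
    # attacker rewrite the query (e.g. username = "x' OR '1'='1").
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(username: str):
    # Remedy: a parameterised query keeps the input out of the SQL text,
    # so it can only ever be treated as data.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # returns every row: the injection succeeds
print(find_user_safe("x' OR '1'='1"))    # returns nothing: the input matches no name
```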

    A Service based Development Environment on Web 2.0 Platforms

    Governments are investing in IT adoption and promoting so-called e-economies as a way to improve competitive advantage. One of the main government actions is to provide internet access to as much of the population as possible, both people and organisations. The internet provides the support required for connecting organisations, people and geographically distributed development teams. Software development is tightly related to the availability of the tools and platforms needed for product development, and the internet is becoming the most widely used platform. Software forges such as SourceForge provide an integrated environment that gathers, at low cost, a set of tools suited to each development. In this paper we propose an innovative approach to software development based on Web 2.0, services and method engineering. This approach represents one possible usage of the internet of the future.

    Emergent digital services in public libraries : a domain study

    Purpose: This paper explores the emergence of digital services in the public library domain via an extensive study of the websites of all Scottish public library services.
    Design/methodology/approach: Over a four-month period all 32 of Scotland's public library authority websites were visited by a researcher, whose goal was to record the options available from the library homepages in the following way:
    • Role of the library in providing page content: content provider or access provider?
    • Was the page providing a digital service?
    • What was the audience for the page: adult, child, or not specified?
    • Description of page content
    • Any noted usability issues
    Each site was visited only to three levels below the initial homepage.
    Findings: The study found a good standard of innovation in digital services around LMS functions, offering users the ability to stay in control of their borrowing and reserving. In addition, there was a consistent set of electronic reference resources subscribed to by multiple libraries, offering high-quality information both within the library and for library members from their home or workplace. Problems were found with regard to guidance on the use of these resources, as well as confusion and inconsistency in terminology across different library services.
    Research limitations/implications: The paper examines only Scottish public library sites and so can only claim to be representative of that country; it can also only represent the sites at the time they were examined.
    Practical implications: The paper should be of interest to public and other librarians interested in patterns across websites in their sector.
    Originality/value: This is the first national study of Scottish public library websites and its findings should be of value as a result.

    Structured Metadata for Direct Resource Location: A Case Study

    This paper proposes that, for scientific and technical information resources, a well-structured, high-quality metadata record contains enough information to find the resource on the Internet, and that, as a consequence, no additional human labour is needed to create or maintain any links. The research was performed by creating a control group of records from the Online Catalogue of the Food and Agriculture Organization of the United Nations and searching for them in various ways in Google and Metacrawler. Based on the results, the method was revised and applied to the larger AGRIS database. The results showed that the method is not only successful but also highly useful for searching for citations. A user interface is suggested, and changes to current cataloguing rules are discussed.
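
    The abstract does not spell out the exact search strategies tried against Google and Metacrawler, but the core idea, that the record itself supplies the search terms, can be sketched as below. The field names (title, creator, year) and the sample record are illustrative assumptions, not the FAO or AGRIS element set.

```python
from urllib.parse import quote_plus

def query_from_record(record: dict) -> str:
    """Build a phrase-based web search query from a metadata record.

    The field names here are illustrative only; real FAO or AGRIS records
    have their own element set.
    """
    parts = []
    if record.get("title"):
        parts.append('"%s"' % record["title"])    # exact-phrase search on the title
    if record.get("creator"):
        parts.append('"%s"' % record["creator"])  # author, also as a phrase
    if record.get("year"):
        parts.append(str(record["year"]))
    return " ".join(parts)

# Invented record for illustration; not taken from the FAO catalogue.
record = {"title": "Integrated pest management in rice", "creator": "Example, A.", "year": 2003}
q = query_from_record(record)
print(q)
print("https://www.google.com/search?q=" + quote_plus(q))
```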

    Optimising metadata to make high-value content more accessible to Google users

    Purpose: This paper shows how information in digital collections that have been catalogued using high-quality metadata can be retrieved more easily by users of search engines such as Google.
    Methodology/approach: The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines.
    Findings/practical implications: The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with Internet users. The practical research demonstrates how search engine accessibility can be compatible with the use of established collection management principles and high-quality metadata.
    Originality/value: The concept of data shoogling is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public-sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.
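
    The paper's specific "data shoogling" techniques are not reproduced in this abstract, so the sketch below only illustrates the general spirit of metadata optimisation: exposing the same high-quality catalogue record to crawlers through a descriptive page title and standard meta elements. The Dublin Core element names follow common convention, and the sample record is entirely invented; neither is taken from the Glasgow Digital Library.

```python
from html import escape

def page_head_from_metadata(record: dict) -> str:
    """Render an HTML <head> fragment from a catalogue record.

    A minimal sketch under assumed field names; the exact fields and pages
    of any real digital library are not described in the abstract.
    """
    title = escape(record["title"])
    lines = [
        f"<title>{title} - {escape(record['collection'])}</title>",
        f'<meta name="description" content="{escape(record["description"])}">',
        f'<meta name="DC.title" content="{title}">',
    ]
    for creator in record.get("creators", []):
        lines.append(f'<meta name="DC.creator" content="{escape(creator)}">')
    for subject in record.get("subjects", []):
        lines.append(f'<meta name="DC.subject" content="{escape(subject)}">')
    return "\n".join(lines)

# Entirely invented example record, for illustration only.
print(page_head_from_metadata({
    "title": "Views of the City, 1900-1950",
    "collection": "Example Digital Library",
    "description": "A digitised collection of early twentieth-century photographs.",
    "creators": ["Example University Library"],
    "subjects": ["Photographs", "Urban history"],
}))
```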

    CC-interop : COPAC/Clumps Continuing Technical Cooperation. Final Project Report

    As far as is known, CC-interop was the first project of its kind anywhere in the world, and it remains so. Its basic aim was to test the feasibility of cross-searching between physical and virtual union catalogues, using COPAC and the three functioning "clumps" or virtual union catalogues (CAIRNS, InforM25 and RIDING), all funded or part-funded by JISC in recent years. The key issues investigated were the technical interoperability of catalogues, the use of collection level descriptions to search union catalogues dynamically, the quality of standards in cataloguing and indexing practices, and the usability of union catalogues for real users. The conclusions of the project were expected to, and indeed do, contribute to the development of the JISC Information Environment and to the ongoing debate on the feasibility and desirability of creating a national UK catalogue. They also inhabit the territory of collection level descriptions (CLDs) and the wider services of JISC's Information Environment Services Registry (IESR). The results of the project will also have applicability for the common information environment, particularly through the landscaping work done via SCONE/CAIRNS. This work is relevant not just to HE and not just to digital materials; it encompasses other sectors and domains and caters for print resources as well. Key findings are thematically grouped as follows:
    • System performance when inter-linking COPAC and the Z39.50 clumps: the various individual Z39.50 configurations permit technical interoperability relatively easily, but only limited semantic interoperability is possible. Disparate cataloguing and indexing practices are an impairment to semantic interoperability, not just for catalogues but also for CLDs and descriptions of services (like those constituting JISC's IESR).
    • Creating dynamic landscaping through CLDs: routines can be written to allow collection description databases to be output in formats that can be used by other UK users of CLDs, including developers of the JISC Information Environment.
    • Searching a distributed (virtual) catalogue or clump via Z39.50: the use of Z39.50-to-Z39.50 middleware permits a distributed catalogue to be searched via Z39.50 from such disparate user services as another virtual union catalogue or clump, a physical union catalogue like COPAC, an individual Z39.50 client, and other Information Environment services. The breakthrough in this Z39.50-to-Z39.50 conundrum came with the discovery that the JISC-funded JAFER software (a result of the 5/99 programme) meets many of the requirements and can be used by the current clumps services. It is technically possible for the user to select all or a subset of the available end-destination Z39.50 servers (we call this "landscaping") within this middleware.
    • Comparing results processing between COPAC and the clumps: most distributed services (clumps) do not bring back complete result sets from the associated Z39.50 servers, in order to save time for users. COPAC's on-the-fly routines could feasibly be applied to the clumps services. An automated search set up to repeat its query of 17 catalogues in a clump (InforM25) hourly over nearly three months returned surprisingly good results; for example, over 90% of responses were received in less than one second, and no servers showed slower response times in periods of traditionally heavy OPAC use (mid-morning to early evening).
    • User behaviour when cross-searching catalogues: the importance to users of a number of on-screen features, including the ability to refine a search and a clear indication that a search is processing; the importance to users of information about the availability of an item as well as the holdings data; the impact of search tools such as Google and Amazon on user behaviour and on expectations of more information than is normally available from a library catalogue; and the distrust of some of the librarians interviewed towards the data sources in virtual union catalogues, in the belief that true interoperability had not been achieved.
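
    The report names JAFER and Z39.50-to-Z39.50 middleware, but this summary gives no interfaces, so the sketch below only illustrates the general fan-out-and-merge shape of cross-searching a clump, with "landscaping" modelled as selecting a subset of target servers. The target list and the search_target function are hypothetical stand-ins, not JAFER's API or any real clump configuration.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical clump configuration; a real service would read its registry
# of Z39.50 targets (host, port, database) from the middleware's setup.
TARGETS = {
    "library-a": ("z3950.library-a.example", 210, "MAIN"),
    "library-b": ("z3950.library-b.example", 210, "OPAC"),
    "library-c": ("z3950.library-c.example", 210, "CAT"),
}

def search_target(name, host, port, database, query, max_records=20):
    """Stand-in for one Z39.50 search (not a real client, and not JAFER).

    A real implementation would open a Z39.50 association to (host, port),
    search `database` and fetch at most max_records records; clumps often
    return partial result sets like this to keep response times down.
    """
    return [f"{name}: simulated record matching {query!r}"]

def cross_search(query, landscape=None, timeout=5.0):
    """Fan a query out to the selected targets and merge whatever comes back.

    `landscape` is an optional subset of target names, mirroring the report's
    notion of letting the user choose which end-destination servers to search.
    """
    chosen = {k: v for k, v in TARGETS.items() if landscape is None or k in landscape}
    merged, errors = [], {}
    with ThreadPoolExecutor(max_workers=max(1, len(chosen))) as pool:
        futures = {
            pool.submit(search_target, name, *cfg, query): name
            for name, cfg in chosen.items()
        }
        for future, name in futures.items():
            try:
                merged.extend(future.result(timeout=timeout))
            except Exception as exc:  # a slow or failing server is reported, not fatal
                errors[name] = str(exc)
    return merged, errors

records, problems = cross_search("shakespeare", landscape={"library-a", "library-c"})
print(records, problems)
```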