2,736 research outputs found


    City Open Data Policies

    The capture and analysis of data are transforming the 21st century. As society becomes more data-driven, data can drive the bottom line for private companies and help the public sector define where and how services can best be delivered. In City Open Data Policies: Learning by Doing, the National League of Cities identifies how cities can take advantage of the opportunities presented by open data initiatives.

    Summary of recommendations:
    Leadership: Political support stands out as one of the key requirements for implementing a successful open data project.
    Appropriate legislation: Enacting legislation or formal policies is a crucial step toward ensuring the growth and sustainability of open data portals.
    Funding: Open data initiatives do not require high levels of funding. It is, however, important that the programs have their own budget line items with specifically allocated resources.
    Technical approach: Leading U.S. cities rely on commercial platforms that facilitate the implementation of open data initiatives, provide technical expertise, and ensure 24/7 customer support, often at a lower cost than providing these services in-house.
    Stakeholder involvement: Open data is a two-way process. It is therefore essential to encourage participation and engagement among multiple stakeholders, including community members, non-profits, universities, the press, businesses, city departments, and other levels of government. Many cities adopt a flexible, usually informal, approach to interacting with these stakeholders.
    Measuring success: Developing evaluation tools should be an integral part of any future open data policy.

    A First Look at the Deprecation of RESTful APIs: An Empirical Study

    REpresentational State Transfer (REST) is a standard software architectural style for building web APIs that integrate software systems over the internet. However, while connecting systems, RESTful APIs can also break the dependent applications that rely on their services when they introduce breaking changes, e.g., when an older version of the API is no longer supported. To warn developers promptly and thus prevent critical impact on downstream applications, a deprecated-removed model should be followed, and deprecation-related information such as alternative approaches should also be listed. While API deprecation analysis as a theme is not new, most existing work focuses on non-web APIs, such as those provided by Java and Android. To investigate RESTful API deprecation, we propose a framework called RADA (RESTful API Deprecation Analyzer). RADA automatically identifies deprecated API elements and analyzes impacted operations from an OpenAPI specification, a machine-readable profile for describing RESTful web services. We apply RADA to 2,224 OpenAPI specifications of 1,368 RESTful APIs collected from APIs.guru, the largest directory of OpenAPI specifications. Based on the data mined by RADA, we perform an empirical study to investigate how the deprecated-removed protocol is followed in RESTful APIs and to characterize practices in RESTful API deprecation. The results of our study reveal several severe deprecation-related problems in existing RESTful APIs. Our implementation of RADA and detailed empirical results are publicly available to support future intelligent tools that could automatically identify and migrate usage of deprecated RESTful API operations in client code.
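    The OpenAPI specification marks deprecated operations with a standard `deprecated: true` boolean on the operation object. A minimal sketch of scanning a spec for such operations (an illustration only, not RADA's actual implementation):

```python
# Scan an OpenAPI 3 document (loaded as a dict) for operations that
# carry the standard `deprecated: true` flag on the operation object.
def find_deprecated_operations(spec):
    """Return (HTTP method, path) pairs for deprecated operations."""
    hits = []
    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            # Skip path-level keys like "parameters" that are not operations.
            if isinstance(op, dict) and op.get("deprecated", False):
                hits.append((method.upper(), path))
    return hits

spec = {
    "openapi": "3.0.0",
    "paths": {
        "/v1/users": {"get": {"deprecated": True}},
        "/v2/users": {"get": {}},
    },
}
print(find_deprecated_operations(spec))  # [('GET', '/v1/users')]
```

    A real analyzer would additionally look for removal (an operation present in one spec version but absent in the next) and for alternatives named in the description, which is where the deprecated-removed protocol comes in.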

    Human Factors in Secure Software Development

    While security research has made significant progress in developing theoretically secure methods, software, and algorithms, software still ships with many possible exploits, many of which target the human factor. The human factor is often called "the weakest link" in software security. To address this, human factors research in security and privacy focuses on the users of technology and considers their security needs. The research then asks how technology can serve users while minimizing risks and empowering them to retain control over their own data. However, these concepts have to be implemented by developers, whose security errors may proliferate to all of their software's users. For example, software that stores data insecurely, fails to secure network traffic correctly, or otherwise does not adhere to secure programming best practices puts all of its users at risk. It is therefore critical that software developers implement security correctly. However, in addition to security rarely being a primary concern while producing software, developers may also lack awareness, knowledge, training, or experience in secure development. A lack of focus on usability in the libraries, documentation, and tools they must use for security-critical components may exacerbate the problem by inflating the time and effort needed to "get security right". This dissertation focuses on how to support developers throughout the process of implementing software securely. The research aims to understand developers' use of resources, their mindsets as they develop, and how their background affects code security outcomes. Qualitative, quantitative, and mixed methods were employed online and in the laboratory, and large-scale datasets were analyzed to conduct this research.
    This research found that the information sources developers use can contribute to code (in)security: copying and pasting code from online forums yields functional code more quickly than using official documentation, but may introduce vulnerable code. We also compared the usability of cryptographic APIs, finding that poor usability, unsafe (possibly obsolete) defaults, and unhelpful documentation also lead to insecure code. On the flip side, well-thought-out documentation and abstraction levels can improve an API's usability and may contribute to secure API usage. We found that developer experience can contribute to better security outcomes, and that studying students in lieu of professional developers can produce meaningful insights into developers' experiences with secure programming. We found that there is a multitude of online secure development advice, but that these advice sources are incomplete and may be insufficient for developers to find help, which may lead them to choose unvetted and potentially insecure resources. This dissertation shows that (a) secure development is subject to human factor challenges and (b) security can be improved by addressing these challenges and supporting developers. The work presented in this dissertation has been seminal in establishing human factors in secure development research within the security and privacy community and has advanced the dialogue about the rigorous use of empirical methods in security and privacy research. Across these research projects, we repeatedly found that usability issues in security and privacy mechanisms, development practices, and operational routines lead to the majority of the security and privacy failures that affect millions of end users.
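    The "unsafe defaults" finding can be made concrete with a common example (our illustration, not a case study from the dissertation): hashing passwords with a fast general-purpose hash, a pattern often copied from forum answers, versus a salted, iterated key-derivation function from the standard library.

```python
import hashlib
import os

password = b"correct horse battery staple"

# Insecure pattern frequently found in copy-pasted snippets: a fast,
# unsalted hash. MD5 output is trivially brute-forced for passwords.
weak = hashlib.md5(password).hexdigest()

# Safer pattern: a salted, iterated KDF from the standard library.
# The iteration count here is illustrative, not a recommendation.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(len(strong))  # 32-byte derived key
```

    Both calls have similar-looking one-line usage, which is exactly the usability problem the dissertation describes: nothing in the API surface signals that one default is unsafe for this purpose.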

    Mining Biological Pathways Using WikiPathways Web Services

    WikiPathways is a platform for creating, updating, and sharing biological pathways [1]. Pathways can be edited and downloaded through the wiki-style website. Here we present a SOAP web service that provides programmatic access to WikiPathways, complementary to the website. We describe the functionality this web service offers and discuss several use cases in detail. Exposing WikiPathways through a web service opens up new ways of utilizing pathway information and assists the community curation process.
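    As a rough sketch of what consuming such a service might look like, the following parses a hypothetical SOAP response listing pathways. The envelope structure, element names, and pathway entries below are illustrative assumptions, not the service's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical SOAP envelope such as a pathway-listing call might return;
# element names and content here are assumptions for illustration.
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<Envelope>
  <Body>
    <listPathwaysResponse>
      <pathway><id>WP254</id><name>Apoptosis</name><species>Homo sapiens</species></pathway>
      <pathway><id>WP179</id><name>Cell Cycle</name><species>Homo sapiens</species></pathway>
    </listPathwaysResponse>
  </Body>
</Envelope>"""

def parse_pathways(xml_text):
    """Extract each <pathway> element as a dict of its child tags."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in pw}
            for pw in root.iter("pathway")]

for pw in parse_pathways(SAMPLE_RESPONSE):
    print(pw["id"], pw["name"])
```

    In practice a SOAP client library would generate such calls from the service's WSDL; the point here is only that a machine-readable response makes pathway data easy to mine programmatically.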

    API Recommendation Using Domain And Source Code Knowledge

    Replacing an old, retired API (Application Programming Interface) with a new, up-to-date one is known as API migration. Developers need to fully understand the documentation of both the retired (replaced) library and the new (replacing) library to perform the migration correctly. This manual process is complex, error-prone, and costly for companies. Many studies have focused on automatically recommending method mappings between different libraries. However, these studies addressed recommendations between methods from different programming languages, and none of them addressed recommendations between methods of libraries within the same programming language. One prior study did recommend mappings between methods of libraries in the same programming language using domain knowledge (method descriptions, method parameters and names). In this thesis, we investigate the mapping between methods in library migrations using both domain knowledge and source code documentation. To this end, we propose RAPIM++, a machine learning approach that recommends correct mappings between source and target methods of third-party libraries using domain knowledge and source code knowledge. Our main contribution is a model built from existing library changes made manually by developers in different open-source Java projects; it uses features derived from the source code implementation, the similarity between method signatures, and the methods' documentation to predict correct method mappings in method-level library migration. RAPIM++ successfully maps methods between different third-party libraries with an accuracy score of 84.4%.
    Additionally, our approach can recommend libraries that lack documentation, since it relies on source code knowledge alongside domain knowledge. We conclude from these results that RAPIM++ can recommend third-party libraries with or without documentation, so even libraries that are not well known and do not belong to popular frameworks can receive comprehensive recommendations from our model. Furthermore, RAPIM++ provides the research and industry community with a lightweight, publicly available web service that makes method mapping between third-party libraries an easy task for developers.
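    One feature family such a model might use is lexical similarity between method signatures. The following is a simple token-based sketch (our illustration, not RAPIM++'s actual feature set): camelCase identifiers are split into words and compared with Jaccard similarity.

```python
import re

def tokenize(signature):
    """Split a method signature into lower-case identifier tokens,
    breaking camelCase names apart (e.g. readLine -> read, line)."""
    tokens = []
    for word in re.findall(r"[A-Za-z]+", signature):
        tokens += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", word)
    return [t.lower() for t in tokens]

def jaccard(sig_a, sig_b):
    """Jaccard similarity between the token sets of two signatures."""
    a, b = set(tokenize(sig_a)), set(tokenize(sig_b))
    return len(a & b) / len(a | b) if a | b else 0.0

score = jaccard("String readLine(BufferedReader reader)",
                "String nextLine(Scanner scanner)")
print(round(score, 2))  # 0.29
```

    A learned model would combine several such signals (signature similarity, documentation text similarity, co-change history mined from open-source migrations) rather than rely on any one of them.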