
    Identification-method research for open-source software ecosystems

    In recent years, open-source software (OSS) development has grown, with many developers around the world working on different OSS projects. A variety of open-source software ecosystems have emerged, for instance GitHub, StackOverflow, and SourceForge. GitHub, one of the most prominent social-programming and code-hosting sites, has amassed numerous open-source software projects and developers on the same virtual collaboration platform. Since GitHub itself is a large open-source community, it hosts a collection of software projects that are developed together and coevolve. The great challenge here is how to identify the relationship between these projects, i.e., project relevance. Software-ecosystem identification is the basis of other studies of the ecosystem. Therefore, how to extract useful information from GitHub and identify software ecosystems is particularly important, and it is also a research area in symmetry. In this paper, a Topic-based Project Knowledge Metrics Framework (TPKMF) is proposed. By collecting a multisource dataset of an open-source ecosystem, project-relevance analysis of the open-source software is carried out on the basis of software-ecosystem identification. Then, we used our Spectral Clustering algorithm based on Core Project (CP-SC) to identify software-ecosystem projects and, in turn, software ecosystems. We verified that a software ecosystem usually contains a core software project with which most other projects are associated. Furthermore, we analyzed the characteristics of the ecosystems and found that interactive information has a greater impact on project relevance. Finally, we summarize the Topic-based Project Knowledge Metrics Framework.
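
    As a rough illustration of the clustering step, the sketch below applies spectral clustering to a toy project-relevance matrix and treats the highest-relevance member of each cluster as its core project. It assumes Python with NumPy and scikit-learn available; the matrix values, the core-project heuristic, and all names are our assumptions, not the paper's CP-SC implementation.

    # Hedged sketch: spectral clustering over a toy project-relevance matrix.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    # Symmetric relevance scores for six hypothetical GitHub projects
    # (e.g., derived from shared contributors, cross-references, topics).
    A = np.array([
        [0.0, 0.9, 0.8, 0.1, 0.0, 0.1],
        [0.9, 0.0, 0.7, 0.0, 0.1, 0.0],
        [0.8, 0.7, 0.0, 0.1, 0.0, 0.1],
        [0.1, 0.0, 0.1, 0.0, 0.8, 0.9],
        [0.0, 0.1, 0.0, 0.8, 0.0, 0.7],
        [0.1, 0.0, 0.1, 0.9, 0.7, 0.0],
    ])

    labels = SpectralClustering(
        n_clusters=2, affinity="precomputed", random_state=0
    ).fit_predict(A)

    # Heuristic: the project with the highest total relevance in a cluster
    # is taken as that ecosystem's core project.
    for c in sorted(set(labels)):
        members = np.where(labels == c)[0]
        core = members[np.argmax(A[members].sum(axis=1))]
        print(f"ecosystem {c}: projects {members.tolist()}, core {core}")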

    Information management in work organization domain in network organizations

    Master's thesis. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    CHOReOS Middleware Specification (D3.1)

    This deliverable specifies the main concepts of the CHOReOS middleware architecture. Starting from the Future Internet (FI) challenges for scalability, heterogeneity, mobility, awareness, and adaptation investigated in prior work in WP1, we introduce the aforementioned concepts to deal with the requirements derived from the FI challenges. In particular, we propose an extensible and scalable service discovery approach for the organization and discovery of services that relies on multiple service discovery protocols. Moreover, we introduce an extensible and scalable approach, based on the service bus paradigm, for service access that features the integration and adaptation of multiple interaction protocols. Furthermore, we propose solutions that enable the execution of FI service compositions, ranging from compositions of choreographed services, developed according to the CHOReOS development process, to massive compositions of things. Finally, we detail the Cloud & Grid middleware facilities that support the overall middleware and the choreographies built on it, via a unified API that provides access to multiple cloud infrastructures (e.g., Amazon EC2, HP Open Cirrus, private clouds).
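
    To make the discovery concept concrete, here is a minimal sketch of multiple discovery protocols aggregated behind one facade, in the spirit of the multi-protocol approach described above. It is written in Python for brevity; the plug-in interface and protocol names are illustrative assumptions, not the CHOReOS middleware API.

    from abc import ABC, abstractmethod

    class DiscoveryPlugin(ABC):
        """One pluggable service-discovery protocol (hypothetical interface)."""
        @abstractmethod
        def lookup(self, service_type: str) -> list:
            ...

    class DnsSdPlugin(DiscoveryPlugin):
        def lookup(self, service_type):
            return [f"dns-sd://{service_type}.local"]           # stubbed result

    class RegistryPlugin(DiscoveryPlugin):
        def lookup(self, service_type):
            return [f"http://registry.example/{service_type}"]  # stubbed result

    class ServiceDiscovery:
        """Facade that merges results from every registered protocol."""
        def __init__(self, plugins):
            self.plugins = plugins

        def lookup(self, service_type):
            return [ep for p in self.plugins for ep in p.lookup(service_type)]

    sd = ServiceDiscovery([DnsSdPlugin(), RegistryPlugin()])
    print(sd.lookup("temperature-sensor"))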

    A study of EU data protection regulation and appropriate security for digital services and platforms

    A law often has more than one purpose, more than one intention, and more than one interpretation. A meticulously formulated and context-agnostic law text will still, when faced with a field propelled by intense innovation, eventually become obsolete. The European Data Protection Directive is a good example of such legislation. It may be argued that the technological modifications brought on by the EU General Data Protection Regulation (GDPR) are nominal in comparison to the previous Directive, but from a business perspective the changes are significant and important. The Directive's lack of direct economic incentive for companies to protect personal data has changed with the Regulation, as companies may now have to pay severe fines for violating the legislation. The objective of the thesis is to establish the notion of trust as a key design goal for information systems handling personal data. This includes interpreting the EU legislation on data protection and using the interpretation as a foundation for further investigation. This interpretation is connected to the areas of analytics, security, and privacy concerns for intelligent service development. Finally, the centralised platform business model and its challenges are examined, and three main resolution themes for regulating platform privacy are proposed. The aims of the proposed resolutions are to create a more trustful relationship between providers and data subjects, while also improving the conditions for competition and thus providing data subjects with service alternatives. The thesis contributes new insights into the evolving privacy practices in the digital society at an important time of transition from service-driven business models to platform business models. Firstly, privacy-related regulation and state-of-the-art analytics development are examined to understand their implications for intelligent services that are based on automated processing and profiling. The ability to choose between providers of intelligent services is identified as the core challenge. Secondly, the thesis examines what is meant by appropriate security for systems that handle personal data, something the GDPR requires organisations to use without, however, specifying what can be considered appropriate. We propose a method for active network security in web software that is developed through the use of analytics for detection and by inserting data generators into a software installation. The active network security method is proposed as a framework for achieving compliance with the GDPR requirement that services and platforms use appropriate security. Thirdly, the platform business model is considered from the privacy point of view, along with the implications of “processing silos” for intelligent services. The centralised platform model is considered problematic from both the data-subject and the competition standpoint. A resolution is offered for enabling user-initiated open data flow to counter the centralised “processing silos”, and thereby to facilitate the introduction of decentralised platforms. The thesis provides an interdisciplinary analysis of the law as it stands (lex lata), and the proposed resolution (lex ferenda) is developed through argumentativist legal dogmatics as an account (de lege ferenda) of how the legal framework ought to be adapted to fit the described environment. User-friendly Legal Science is applied as a theory framework to provide a holistic approach to answering the research questions. The User-friendly Legal Science theory has its roots in design science and offers a way towards achieving interdisciplinary research in the fields of information systems and legal science.
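
    As a loose illustration of the "data generators" idea as we read it, the sketch below plants synthetic decoy records among real personal data and flags any query that touches them, feeding a detection signal to analytics. The names and logic are our assumptions for illustration, not the thesis's actual method.

    import uuid

    def generate_decoys(n):
        """Create synthetic identifiers that no legitimate workflow should touch."""
        return {f"decoy-{uuid.uuid4()}" for _ in range(n)}

    DECOYS = generate_decoys(3)
    DATASET = {"alice", "bob"} | DECOYS   # real subjects plus planted decoys

    def audited_query(requested_ids):
        hits = requested_ids & DATASET
        touched = hits & DECOYS
        if touched:
            # In a real deployment this event would feed the analytics pipeline.
            print(f"ALERT: {len(touched)} decoy record(s) accessed")
        return hits - DECOYS

    audited_query({"alice", next(iter(DECOYS))})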

    The Impact of Rogue Nodes on the Dependability of Opportunistic Networks

    Opportunistic Networks (OppNets) are an extension of the classical Mobile Ad hoc Networks (MANETs) in which the network does not depend on any infrastructure (e.g., access points or centralized administrative nodes). OppNets can be more flexible than MANETs because an end-to-end path does not exist and much longer delays can be expected. Whereas a rogue access point is typically immobile in legacy infrastructure-based networks and can have considerable impact on overall connectivity, the research question in this project is how the pattern and mobility of rogue nodes impact the dependability and overall average latency in an opportunistic network environment. We have simulated a subset of the mathematical modeling performed in a previous publication in this regard. Ad hoc networks are very challenging to model due to their mobility and intricate routing schemes. We started our research by exploring the evolution of opportunistic networks and then implemented the rogue behavior in The ONE (Opportunistic Network Environment, by Nokia Research Centre) simulator. The ONE simulator is an open-source simulator developed in Java that simulates layer 3 of the OSI model. The rogue behavior was implemented in the simulator to observe the effect of rogue nodes. Finally, we extracted the desired dataset to measure the latency by carefully simulating the intended behavior, keeping the rest of the parameters (e.g., node movement models, signal range and strength, points of interest (POI), etc.) unchanged. Our results are encouraging and coincide with the average-latency deterioration patterns modeled by previous researchers, with a few exceptions. The practical implementation of the plug-in in the ONE simulator has shown that only a very high proportion of rogue nodes impacts the latency, making OppNets more resilient and less vulnerable to malicious attacks.
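
    The qualitative finding can be reproduced with a toy store-carry-forward model: copies of a message spread through random contacts, rogue nodes silently discard them, and average delivery latency is measured against the rogue fraction. This is a deliberately simplified Python abstraction, not a ONE-simulator scenario; all parameters are invented.

    import random

    def avg_latency(n=50, rogue_frac=0.2, trials=200, seed=1):
        """Mean ticks until node n-1 holds a copy first created at node 0."""
        rng = random.Random(seed)
        totals = []
        for _ in range(trials):
            # Source (0) and destination (n-1) are never rogue here.
            rogues = set(rng.sample(range(1, n - 1), int(rogue_frac * n)))
            carriers, t = {0}, 0
            while (n - 1) not in carriers and t < 10_000:
                t += 1
                met = {rng.randrange(n) for _ in carriers}  # one contact each
                carriers |= met - rogues  # rogue nodes discard copies
            totals.append(t)
        return sum(totals) / len(totals)

    for frac in (0.0, 0.2, 0.5, 0.8):
        print(f"rogue fraction {frac:.1f}: "
              f"avg latency {avg_latency(rogue_frac=frac):.1f} ticks")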

    A Policy-Based Resource Brokering Environment for Computational Grids

    With the advances in networking infrastructure in general, and the Internet in particular, we can build grid environments that allow users to utilize a diverse set of distributed and heterogeneous resources. Since the focus of such environments is the efficient usage of the underlying resources, a critical component is the resource brokering environment that mediates the discovery, access and usage of these resources. With the consumer's constraints, provider's rules, distributed heterogeneous resources and the large number of scheduling choices, the resource brokering environment needs to decide where to place the user's jobs and when to start their execution in a way that yields the best performance for the user and the best utilization for the resource provider. As brokering and scheduling are very complicated tasks, most current resource brokering environments are either specific to a particular grid environment or have limited features. This makes them unsuitable for large applications with heterogeneous requirements. In addition, most of these resource brokering environments lack flexibility. Policies at the resource, application, and system levels cannot be specified and enforced to provide commitment to the guaranteed level of allocation that can help in attracting grid users and contribute to establishing credibility for existing grid environments. In this thesis, we propose and prototype a flexible and extensible Policy-based Resource Brokering Environment (PROBE) that can be utilized by various grid systems. In designing PROBE, we follow a policy-based approach that provides PROBE with the intelligence not only to match the user's request with the right set of resources, but also to assure the guaranteed level of the allocation. PROBE treats the task allocation as a Service Level Agreement (SLA) that needs to be enforced between the resource provider and the resource consumer. The policy-based framework is useful in a typical grid environment where resources, most of the time, are not dedicated. In implementing PROBE, we have utilized a layered architecture and facade design patterns. These, along with the well-defined API, make the framework independent of any architecture and allow for the incorporation of different types of scheduling algorithms, applications and platform adaptors as the underlying environment requires. We have utilized XML as a base for all the specification needs. This provides a flexible mechanism to specify the heterogeneous resources and user's requests along with their allocation constraints. We have developed XML-based specifications by which high-level internal structures of resources, jobs and policies can be specified. This provides interoperability in which a grid system can utilize PROBE to discover and use resources controlled by other grid systems. We have implemented a prototype of PROBE to demonstrate its feasibility. We also describe a test bed environment and the evaluation experiments that we have conducted to demonstrate the usefulness and effectiveness of our approach.
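
    The sketch below illustrates the flavour of such XML-based specifications: a provider policy parsed and enforced when a request arrives. The schema and field names are hypothetical stand-ins, not PROBE's actual specification format; Python's standard library is used for brevity.

    import xml.etree.ElementTree as ET

    POLICY_XML = """
    <policy resource="cluster-a">
      <rule attribute="max_jobs_per_user" value="4"/>
      <rule attribute="allowed_hours" value="08-20"/>
    </policy>
    """

    def load_rules(xml_text):
        """Parse provider rules into an attribute -> value mapping."""
        root = ET.fromstring(xml_text)
        return {r.get("attribute"): r.get("value") for r in root.findall("rule")}

    def admits(rules, user_jobs):
        """Enforce the provider's job-count rule for an incoming request."""
        return user_jobs < int(rules["max_jobs_per_user"])

    rules = load_rules(POLICY_XML)
    print(rules, admits(rules, user_jobs=3))   # True: request can be scheduled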

    A web-based approach to engineering adaptive collaborative applications

    Current methods employed to develop collaborative applications have to make decisions and speculate about the environment in which the application will operate, the network infrastructure that will be used, and the device type the application will run on. These decisions and assumptions about the environment in which collaborative applications were designed to work are not ideal. These methods produce collaborative applications that are characterised as being inflexible, working on homogeneous networks and single platforms, requiring pre-existing knowledge of the data and information types they need to use, and having a rigid choice of architecture. Future collaborative applications, on the other hand, are required to be flexible, to work in highly heterogeneous environments, and to be adaptable to different networks and a range of device types. This research investigates the role that the Web and its various pervasive technologies, along with a component-based Grid middleware, can play to address these concerns. The aim is to develop an approach to building adaptive collaborative applications that can operate in heterogeneous and changing environments. This work proposes a four-layer model that developers can use to build adaptive collaborative applications. The four-layer model is populated with Web technologies such as Scalable Vector Graphics (SVG), the Resource Description Framework (RDF), the SPARQL Protocol and RDF Query Language (SPARQL), and Gridkit, a middleware infrastructure based on the Open Overlays concept. The Middleware layer (the first layer of the four-layer model) addresses network and operating-system heterogeneity; the Group Communication layer enables collaboration and data sharing; the Knowledge Representation layer proposes an interoperable RDF data-modelling language and a flexible storage facility with an adaptive architecture for heterogeneous data storage; and finally, the Presentation and Interaction layer proposes a framework (Oea) for scalable and adaptive user interfaces. The four-layer model has been successfully used to build a collaborative application, called Wildfurt, that overcomes challenges facing collaborative applications. This research has demonstrated new applications for cutting-edge Web technologies in the area of building collaborative applications. SVG has been used to develop superior adaptive and scalable user interfaces that can operate on different device types. RDF and RDFS have also been used to design and model collaborative applications, providing a mechanism to define classes and properties and the relationships between them. A flexible and adaptable storage facility that is able to change its architecture based on the surrounding environment and requirements has also been achieved by combining the RDF technology with the Open Overlays middleware, Gridkit.
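
    The sketch below conveys the Knowledge Representation layer's idea: model shared application data as RDF classes and properties, then query it with SPARQL. The vocabulary is invented for illustration, and the rdflib Python package stands in for the thesis's storage facility.

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/collab#")
    g = Graph()
    g.add((EX.Document, RDF.type, RDFS.Class))        # define a class...
    g.add((EX.editedBy, RDFS.domain, EX.Document))    # ...and a property
    g.add((EX.doc1, RDF.type, EX.Document))
    g.add((EX.doc1, EX.editedBy, Literal("alice")))

    # SPARQL: who is editing which shared document?
    query = """
    PREFIX ex: <http://example.org/collab#>
    SELECT ?doc ?user WHERE { ?doc a ex:Document ; ex:editedBy ?user . }
    """
    for doc, user in g.query(query):
        print(doc, user)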

    Common Educational Teleoperation Platform for Robotics Utilizing Digital Twins

    The erratic modern world introduces challenges to all sectors of society and potentially creates additional inequality. One way to decrease educational inequality is to provide remote access to facilities that enable learning and training. A similar approach of remote resource usage can be applied in resource-poor situations where the required equipment is available at other premises. The concept of Industry 5.0 (i5.0) focuses on a human-centric approach, with enabling technologies concentrating on human–machine interaction and emphasizing the importance of societal values. This paper introduces a novel robotics teleoperation platform aligned with i5.0. The platform reduces inequality by allowing robotics to be used and learned remotely, independently of time and location. The platform is based on digital twins with bi-directional data transmission between the physical and digital counterparts. The proposed system allows teleoperation, remote programming, near real-time monitoring of controlled robots, robot time scheduling, and social interaction between users. The system design and implementation are described in detail, followed by experimental results.
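
    A toy sketch of the bi-directional twin synchronisation described above: the digital twin mirrors the robot's reported state while teleoperation commands flow back through the twin. Two in-process Python queues stand in for the real transport; the message format and loop are illustrative assumptions only.

    import queue

    state_up = queue.Queue()   # physical -> digital (telemetry)
    cmd_down = queue.Queue()   # digital  -> physical (commands)

    class PhysicalRobot:
        def __init__(self):
            self.joint = 0.0
        def step(self):
            while not cmd_down.empty():
                self.joint = cmd_down.get()   # apply teleoperation command
            state_up.put(self.joint)          # publish telemetry

    class DigitalTwin:
        def __init__(self):
            self.joint = 0.0
        def step(self):
            while not state_up.empty():
                self.joint = state_up.get()   # mirror the physical state
        def command(self, target):
            cmd_down.put(target)              # forward a user command

    robot, twin = PhysicalRobot(), DigitalTwin()
    twin.command(1.57)                        # user requests a joint move
    for _ in range(2):                        # near real-time sync ticks
        robot.step(); twin.step()
    print(f"twin mirrors joint angle: {twin.joint} rad")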

    Decentralized control methodology for multi-machine/multi-converter power systems

    In this project we evaluate a framework for the synchronization of mixed machine–converter power grids. Synchronous machines are assumed to be actuated by mechanical torque injections, while the converters are actuated by DC-side current injections. As this approach is based on model matching, the converter's modulation angle is driven by the DC-side voltage measurement, while its modulation amplitude is assigned analogously to the electrical machine's excitation current. In this way we provide extensions to the swing-equations model, retaining physical interpretation, and design controllers that achieve various objectives: frequency synchronization while stabilizing an angle configuration and a bus-voltage magnitude prescribed by an optimal power flow (OPF) set-point. We further discuss decentralization issues related to clock drifts, loopy graphs, model reduction, energy-function selection, and characterizations of operating points. Finally, a numerical evaluation is based on experiments on three- and two-bus systems.
    Comment: 38 pages, Semester Thesis at ETH Zurich
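
    As we read the model-matching idea, the control law can be sketched as follows (our notation, inferred from this abstract and not necessarily the thesis's exact equations): the machine obeys the swing equation, while the converter's modulation angle integrates the measured DC voltage so that the DC capacitor mimics the rotor's kinetic storage.

    \begin{align*}
      M\,\ddot{\theta} + D\,\dot{\theta} &= P_m - P_e(\theta)
        && \text{machine swing dynamics} \\
      \dot{\theta}_c &= \eta\, v_{dc}, \qquad \eta = \omega^{*}/v_{dc}^{*}
        && \text{converter angle driven by DC voltage} \\
      m_c &= \mu
        && \text{modulation amplitude, assigned like excitation current}
    \end{align*}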