Economia colaborativa
The importance of analysing the main legal challenges that the collaborative economy raises, given the implications of the paradigm shifts in business models and in the actors involved, is indisputable; it reflects the need to foster legal certainty around practices that drive economic growth and social well-being.
The Research Centre for Justice and Governance (JusGov) assembled a multidisciplinary team that, beyond legal scholars, includes researchers from other fields such as economics and management, drawn from JusGov's various groups (with particular participation from researchers in the E-TEC group: State, Business and Technology) and from other prestigious national and international institutions, to develop a project in this domain. Its aim is to identify the legal problems that the collaborative economy raises, assess whether solutions for them already exist, and reflect on whether amendments should be introduced or whether entirely new regulation is needed.
The results of this research are presented in this volume, which is intended to encourage continued debate on the topic. This work is financed by national funds through FCT — Fundação para a Ciência e a Tecnologia, I.P., under Financing UID/05749/202
Technical Dimensions of Programming Systems
Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet, this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art.
In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too.
We present a framework of technical dimensions which capture the underlying characteristics of programming systems and provide a means for conceptualizing and comparing them.
We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated rather than relying solely on personal impressions.
Our framework is derived from a qualitative analysis of past programming systems. We outline two concrete ways of using it. First, we show how it can be used to analyze a recently developed programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems.
Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable, and/or powerful, but such efforts are disconnected. They are informal, guided by the personal vision of their authors, and can therefore be evaluated and compared only on the basis of individual experience using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.
Defining Service Level Agreements in Serverless Computing
The emergence of serverless computing has brought significant advancements to the delivery of computing resources to cloud users. With the abstraction of infrastructure, ecosystem, and execution environments, users can focus on their code while relying on the cloud provider to manage the abstracted layers. In addition, desirable features such as autoscaling and high availability became the provider's responsibility and can be adopted by the user's application at no extra overhead.
Despite such advancements, significant challenges must be overcome as applications transition from monolithic stand-alone deployments to the ephemeral and stateless microservice model of serverless computing. These challenges stem from the unique conceptual and implementation models of serverless computing. One notable challenge is the complexity of defining Service Level Agreements (SLA) for serverless functions. Because the serverless model shifts the administration of the resource, ecosystem, and execution layers to the provider, users become mere consumers of the provider's abstracted platform with no insight into its performance: suboptimal conditions in the abstracted layers are invisible to the end user, who has no means to assess them. SLA in serverless computing must therefore take the unique abstraction of its model into consideration.
This work investigates Service Level Agreement (SLA) modeling for the execution of serverless functions and serverless chains. We highlight how serverless SLA fundamentally differs from earlier cloud delivery models. We then propose an approach that defines SLA for serverless functions using resource utilization fingerprints of function executions, together with a method to assess whether executions adhere to that SLA. We evaluate the approach's accuracy in detecting SLA violations across a broad range of serverless application categories. Our empirical validation shows high accuracy in detecting SLA violations caused by resource contention and degradation of the provider's ecosystem, detecting Execution-SLA violations with accuracy of up to 99%.
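The fingerprint-and-check idea can be sketched as follows. This is a minimal illustration, not the authors' actual method: the metric names, the per-metric z-score test, and the threshold are all assumptions.

```python
# Hypothetical sketch: fingerprint a function's baseline executions, then
# flag executions whose resource metrics deviate too far from the fingerprint.
import statistics

def build_fingerprint(baseline_runs):
    """Summarize per-metric mean and stdev over baseline executions."""
    metrics = baseline_runs[0].keys()
    return {
        m: (statistics.mean(r[m] for r in baseline_runs),
            statistics.pstdev(r[m] for r in baseline_runs))
        for m in metrics
    }

def violates_sla(execution, fingerprint, threshold=3.0):
    """Flag an execution whose metrics deviate beyond `threshold` stdevs."""
    for metric, (mean, stdev) in fingerprint.items():
        spread = stdev or 1e-9  # guard against zero variance
        if abs(execution[metric] - mean) / spread > threshold:
            return True
    return False

# Invented baseline metrics for one serverless function.
baseline = [
    {"cpu_ms": 100, "mem_mb": 128, "duration_ms": 210},
    {"cpu_ms": 104, "mem_mb": 130, "duration_ms": 205},
    {"cpu_ms": 98,  "mem_mb": 127, "duration_ms": 215},
]
fp = build_fingerprint(baseline)
print(violates_sla({"cpu_ms": 101, "mem_mb": 129, "duration_ms": 209}, fp))  # typical run
print(violates_sla({"cpu_ms": 400, "mem_mb": 129, "duration_ms": 900}, fp))  # contended run
```

In practice the fingerprint would cover richer signals than these three numbers, but the shape of the check is the same: a per-function statistical profile plus a deviation test.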
After Creation: Intergovernmental Organizations and Member State Governments as Co-Participants in an Authority Relationship
This is a re-amalgamation of what started as one manuscript and became two when its length proved greater than any publisher wanted to consider. The split removed what are now Parts 3, 4, and 5, so that the manuscript focused on the outcome-related shared beliefs that hold an authority relationship together. Those parts were last worked on in 2018; the rest was last worked on in late 2021, but all of it remains incomplete.
The relational approach adopted in this study treats intergovernmental organizations (IGOs) and the governments of their member states as co-participants in an authority relationship. Authority relationships link two types of actor, defined by their authority-holder or addressee role in the relationship, through a set of shared beliefs about why the relationship exists and how the participants should fulfill their respective roles. The IGO as authority holder has a role that includes a right to instruct other actors about what they should or should not do; the governments of member states as addressees are expected to comply with the instructions. Three sets of shared beliefs provide the conceptual “glue” holding the relationship together. The first defines the goal of the collective effort, providing both the rationale for having the authority relationship and a lodestar for assessing the collective effort’s success or lack of it. The second defines the shared understanding about the allocation of roles and the process of interaction by establishing shared expectations about a) the selection process by which particular actors acquire authority-holder roles, b) the definitions identifying one or more categories of addressees expected to follow instructions, and c) the procedures through which the authority holder issues instructions. The third focuses on the outcomes of cooperation through the relationship by defining a) the substantive areas in which the authority holder may issue instructions, b) the bases for assessing the relevance of actions mandated in instructions to reaching the goal, and c) the relative efficacy of the action paths chosen for reaching the goal as compared with other possible action paths.
Using an authority relationship framework to analyze cooperation through IGOs highlights the inherently bi-directional nature of IGO–member government activity by viewing their interaction as a three-step process: the IGO as authority holder decides when to issue what instruction; the member state governments as followers react to the instruction with anything from prompt and full compliance through various forms of pushback to outright rejection; and the IGO as authority holder responds to the followers' reactions with efforts to increase individual compliance with instructions and to reinforce continuing acceptance of the authority relationship. Foregrounding the dynamics produced by the interaction of these two streams of perception and action reveals more clearly how far intergovernmental organizations acquire the capacity to operate as independent actors, the dynamic ways they maintain that capacity, and how much they influence member governments' beliefs and actions at different times. The approach fosters a better understanding of why, when, and for how long governments choose cooperation through an IGO, even in periods of rising unilateralism.
A productive response to legacy system petrification
Requirements change. The requirements of a legacy information system change, often in unanticipated ways, and at a more rapid pace than the rate at which the information system itself can be evolved to support them. The capabilities of a legacy system progressively fall further and further behind their evolving requirements, in a degrading process termed petrification. As systems petrify, they deliver diminishing business value, hamper business effectiveness, and drain organisational resources.

To address legacy systems, the first challenge is to understand how to shed their resistance to tracking requirements change. The second challenge is to ensure that a newly adaptable system never again petrifies into a change-resistant legacy system. This thesis addresses both challenges.

The approach outlined herein is underpinned by an agile migration process, termed Productive Migration, that homes in upon the specific causes of petrification within each particular legacy system and provides guidance upon how to address them. That guidance comes in part from a personalised catalogue of petrifying patterns, which capture recurring themes underlying petrification. These steer us to the problems actually present in a given legacy system, and lead us to suitable antidote productive patterns via which we can deal with those problems one by one.

To prevent newly adaptable systems from again degrading into legacy systems, we appeal to a follow-on process, termed Productive Evolution, which embraces and keeps pace with change rather than resisting and falling behind it. Productive Evolution teaches us to be vigilant against signs of system petrification and helps us to nip them in the bud. The aim is to nurture systems that remain supportive of the business, that are adaptable in step with ongoing requirements change, and that continue to retain their value as significant business assets.
Industry 4.0: product digital twins for remanufacturing decision-making
Currently there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as vital to the circular economy (CE) because it extends the in-use life of products, but its synergy with I4.0 has so far received little attention. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product's life cycle.
The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed, it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision making for operations planning. The model's architecture was derived using a bottom-up approach in which requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset's through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed to use Internet of Things (IoT) components, a neural network for remaining-life estimation, and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information. Existing IoT components provide rudimentary “smart” capabilities, but their integration is complex, and the durability of such systems over extended product life cycles needs further exploration.
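The planning loop described above (estimate remaining life from through-life data, then search for the best end-of-life route) can be sketched in miniature. Everything here is an illustrative assumption: the thesis uses a neural-network estimator and a search algorithm, whereas this sketch stands in a linear wear model and an exhaustive scan over invented routes.

```python
# Illustrative sketch of digital-twin-driven end-of-life planning.
# All names, numbers, and the linear "remaining life" stub are assumptions.

def estimate_remaining_life(usage_hours, max_life_hours=10_000):
    """Stand-in for the neural-network estimator: simple linear wear model."""
    return max(0.0, 1.0 - usage_hours / max_life_hours)  # fraction of life left

ROUTES = {  # hypothetical end-of-life options: fixed cost, value scaled by life left
    "remanufacture": {"cost": 40, "value_per_life": 200},
    "refurbish":     {"cost": 15, "value_per_life": 90},
    "recycle":       {"cost": 5,  "value_per_life": 20},
}

def plan_end_of_life(usage_hours):
    """Exhaustively score each route by net value given estimated remaining life."""
    life = estimate_remaining_life(usage_hours)
    scores = {name: life * r["value_per_life"] - r["cost"]
              for name, r in ROUTES.items()}
    return max(scores, key=scores.get)

print(plan_end_of_life(2_000))   # lightly used core
print(plan_end_of_life(9_500))   # nearly worn-out core
```

The point of the sketch is the data flow, not the numbers: early-life usage data feeds a health estimate, which in turn changes which processing route is worth choosing.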
Platform protocol place: a practice-based study of critical media art practice (2007-2020)
This practice-based research project focuses on critical media art practices in contemporary digital culture. The theoretical framework employed in this inquiry draws from the work of the Frankfurt School, in particular Theodor Adorno and Max Horkheimer’s The Culture Industry: Enlightenment as Mass Deception. Using Adorno and Horkheimer’s thesis as a theoretical guide, this research project formulates the concept of the digital culture industry: a concept that refers to the contemporary era of networked capitalism, an era defined by the unprecedented extraction, accumulation, and manipulation of data and the material and digital infrastructures that facilitate it. This concept is used as a framing mechanism that articulates certain techno-political concerns within networked capitalism and responds to them through practice.
The second concept formulated within this research project is Platform Protocol Place. The function of this second concept is to frame and outline the body of practice-based work developed in this study. It is also used to make complex technological issues accessible and to communicate these issues through public exhibition and within this written thesis.
The final concept developed in this research project is tactical media archaeology. This concept describes the techniques and approaches employed in developing the body of practice-based work that is the central focus of this research project. The approach is a synthesis of two subfields of media art practice and theory: tactical media and media archaeology. Through practice, tactical media archaeology critiques the geopolitical machinations and systems beneath the networked devices and interfaces of the digital culture industry.
Utilization and Effect of Multiple Content Modalities in Online Higher Education: Shifting Trajectories Toward Success Through Universal Design for Learning
The idea that offering multiple means of representing course content will assist students of all abilities constitutes one pillar of Universal Design for Learning (UDL), a framework intended to address the needs of students with disabilities while also holding relevance for all students. The efficacy of this UDL guideline lacks a verified empirical basis and therefore merits rigorous examination. My dissertation investigates the effect on learning outcomes of students using multiple modalities while learning course content (e.g., text, video, audio, interactive, or mixed content), with the aim of improving educational success for non-traditional online students.
I investigate this effect for older undergraduates from a women’s institution who are predominantly low income and working mothers returning to school, many of whom are racial/ethnic minorities. Notably, challenges resulting from a lack of disability diagnosis and accommodation may be prevalent but hidden among these students. Traditional higher education typically does not serve such students well. Use of multiple modalities in class activities holds potential for improving their outcomes.
Results show positive effects of using multiple modalities for learning content in courses across the curriculum presented in an adaptive learning system. Using a within-subjects study design, I found a medium-large positive effect size for knowledge gained across adaptive activities. Using an instrumental variables approach, I found a very large positive effect size for weekly assignment and quiz grades, and results suggest a large positive effect on course grade as well. I illustrate how combining knowledge of this effect with other information from the adaptive learning system and online tutoring in a Bayesian network analysis can predict where students may benefit from tutoring. This can inform potential support recommendations that would be particularly relevant when implementation of UDL-based design does not yet fully address students’ learning needs.
These results provide the first evidence confirming an effect of UDL’s multiple modalities guideline on collegiate learning outcomes and illustrate how this information could be used to provide recommendations to students from a learning analytics perspective. Results have implications for researchers, faculty, course developers, instructional designers, analytics professionals, and institutions aiming to improve learning outcomes through a design-based approach.
Influence diagrams for complex litigation
Effective advocacy depends critically on the ability of attorneys to formulate, analyze, and compare rival courses of action. Whereas attorneys have been doing these things for centuries using little more than their gut instincts and experiences, sophisticated decision aids are now available that can improve the way attorneys assess the value of their cases and the strategic decisions that they make. These aids are proving valuable in medicine and business, but they have not impacted legal practice. This Article seeks to correct this oversight by showing how easy-to-use graphical models provide guidance for strategic legal decisions. Beginning with a paradigmatic example of a plaintiff who must choose between proceeding to trial or settling out of court, the Article shows how decision aids handle the uncertainties and interdependencies that arise when real-world considerations are introduced. In particular, the Article makes the case that influence diagrams, a relative newcomer in the field of decision analysis, should be the decision aid of choice in complex litigation matters.
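The paradigmatic settle-vs-trial choice reduces, in its simplest form, to an expected-value comparison. The sketch below uses invented probabilities, awards, and costs purely for illustration; a full influence diagram would additionally model the dependencies among such uncertainties, which is precisely what the simple calculation cannot capture.

```python
# Toy decision-analysis sketch of the settle-vs-trial choice.
# All figures are invented for illustration.

def expected_trial_value(p_win, award, trial_cost):
    """Expected monetary value of going to trial, from the plaintiff's side."""
    return p_win * award - trial_cost

def best_option(settlement_offer, p_win, award, trial_cost):
    """Take the settlement if it matches or beats the expected trial value."""
    return "settle" if settlement_offer >= expected_trial_value(p_win, award, trial_cost) else "trial"

# A 60% chance of a $500k award against $80k in trial costs gives an
# expected trial value of $220k, so a $250k settlement offer is worth taking.
print(best_option(250_000, 0.6, 500_000, 80_000))
```

Influence diagrams generalize this by representing each uncertainty, decision, and payoff as a node and letting arcs encode how they depend on one another, so interdependencies (say, between liability and damages findings) are handled rather than assumed away.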