2D visual codes: why are they not everywhere?
One key characteristic of ubiquitous computing is the disappearing boundary between physical and virtual elements,
a mindset shift from interaction with the computer to interaction with the environment. 2D visual
codes are an important enabling technology for this increasing integration between physical and virtual spaces.
However, despite the availability of a broad range of technologies for 2D visual codes, their common usage is
still far from being a reality. In this work, we explore some of the factors that may influence the adoption of such
interaction techniques. The study was based on the development of a prototype in which a set of applications
was made available through interaction with visual codes. The prototype was deployed for three months in a
public setting where users could try this technology for themselves. The results from the study suggest that visual
codes are seen as a simple interaction model, although a brief initial introduction may still be needed. The study
also highlighted some functional limitations and strong technical constraints that proved very demanding
in the context of a real scenario and on people's own devices. Although the curiosity
factor works strongly in favour of visual codes, their widespread adoption will be difficult or, at least, will not
happen as spontaneously as a simple demo may initially suggest.
The MegaM@Rt2 ECSEL project: MegaModelling at runtime – scalable model-based framework for continuous development and runtime validation of complex systems
A major challenge for the European electronic components and systems (ECS) industry is to increase productivity and reduce costs while ensuring safety and quality. Model-Driven Engineering (MDE) principles have already shown valuable capabilities for the development of ECSs but still need to scale to support the real-world scenarios implied by the full deployment and use of complex electronic systems, such as Cyber-Physical Systems and real-time systems. Moreover, maintaining efficient traceability, integration and communication between fundamental stages of the development lifecycle (i.e., design time and runtime) is another challenge to the scalability of MDE tools and techniques. This paper presents “MegaModelling at runtime – Scalable model-based framework for continuous development and runtime validation of complex systems” (MegaM@Rt2), an ECSEL-JU project whose main goal is to address the above-mentioned challenges. Driven by both large and small industrial enterprises, with the support of research partners and technology providers, MegaM@Rt2 aims to deliver a framework of tools and methods for: (i) system engineering/design and continuous development, (ii) related runtime analysis, and (iii) global model and traceability management. This project has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under grant agreement No. 737494. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation program and from Sweden, France, Spain, Italy, Finland, and the Czech Republic.
The Role of Models@run.time in Autonomic Systems: keynote
Autonomic systems manage their own behaviour in accordance with high-level goals. This paper presents a brief outline of challenges related to Autonomic Computing due to uncertainty in operational environments, and the role that models@run.time play in meeting them. We argue that the existing progress in Autonomic Computing can be further exploited with the support of runtime models. We briefly discuss our ideas related to the need to understand the extent to which the high-level goals of the autonomic system are being satisfied, to support decision-making based on runtime evidence, and the need to support self-explanation.
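The idea of using runtime evidence of goal satisfaction to drive adaptation decisions can be sketched as follows. This is an illustrative sketch, not the paper's method; the class and function names (`GoalModel`, `decide`) and the satisfaction metric are assumptions introduced here for illustration.

```python
# Illustrative sketch (not from the paper): a runtime model tracking how well a
# high-level goal is satisfied, so an autonomic manager can decide whether to
# adapt based on runtime evidence. All names below are hypothetical.

class GoalModel:
    """Runtime model of one high-level goal (hypothetical API)."""

    def __init__(self, name, target, tolerance):
        self.name = name
        self.target = target          # desired value, e.g. max latency in ms
        self.tolerance = tolerance    # acceptable deviation from the target
        self.observations = []

    def observe(self, value):
        """Record a new piece of runtime evidence."""
        self.observations.append(value)

    def satisfaction(self):
        """Fraction of observations within tolerance of the target."""
        if not self.observations:
            return 1.0
        ok = sum(1 for v in self.observations
                 if v <= self.target + self.tolerance)
        return ok / len(self.observations)

def decide(goal, threshold=0.9):
    """Trigger adaptation when evidence shows the goal is degrading."""
    return "adapt" if goal.satisfaction() < threshold else "no-op"

goal = GoalModel("response_time", target=200, tolerance=20)
for latency in [150, 180, 250, 300, 310]:
    goal.observe(latency)
print(decide(goal))  # prints "adapt": only 2 of 5 observations meet the goal
```

Keeping the goal and its evidence in an explicit runtime model is also what enables self-explanation: the model can report *why* it adapted (which goal degraded, and by how much).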
Unfolding the In-between Image: The Emergence of an Incipient Image at the Intersection of Still and Moving Images
As digital technology has transformed various aspects of our screen culture over the past few decades, we have been witnessing a disappearing boundary between photographic still images and cinematic moving images. An emerging in-between image has become increasingly prominent in this new image culture, which attempts to negotiate the grey area between stillness and movement. This in-between image, manifest in a variety of formats and media, points to an increasingly solid middle ground between the traditional divisions of still and moving images. This paper builds a conceptual framework for analysing this new type of image, exploring the roots of this emergent category before focusing on its contemporary trajectory as exemplified by the work of Adad Hannah, Hiroshi Sugimoto, Jeff Wall, and James Nares.
Software engineering processes for self-adaptive systems
In this paper, we discuss how, for self-adaptive systems, some activities that traditionally occur at development-time are moved to run-time. Responsibilities for these activities shift from software engineers to the system itself, causing the traditional boundary between development-time and run-time to blur. As a consequence, we argue that the traditional software engineering process needs to be reconceptualized to distinguish both development-time and run-time activities, and to support designers in deciding how to properly engineer such systems. Furthermore, we identify a number of challenges related to this required reconceptualization, and we propose initial ideas based on process modeling. We use the Software and Systems Process Engineering Meta-Model (SPEM) to specify which activities are meant to be performed off-line and on-line, and also the dependencies between them. The proposed models should capture information about the costs and benefits of shifting activities to run-time, since such models should support software engineers in their decisions when they are engineering self-adaptive systems.
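The core idea of tagging process activities as off-line or on-line, with dependencies between them, can be sketched minimally. This is not the paper's SPEM model; the data structures and names below are assumptions introduced purely to illustrate the off-line/on-line split.

```python
# Minimal sketch (assumed names, not the paper's SPEM models): tag process
# activities as off-line (development time) or on-line (runtime) and record
# dependencies between them, as the paper proposes doing with SPEM.

from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    phase: str                                # "off-line" or "on-line"
    depends_on: list = field(default_factory=list)

def online_with_offline_inputs(activities):
    """On-line activities that depend on an off-line activity: these mark
    the boundary where responsibility shifts from engineers to the system."""
    return [a.name for a in activities
            if a.phase == "on-line"
            and any(d.phase == "off-line" for d in a.depends_on)]

design = Activity("design adaptation policy", "off-line")
monitor = Activity("monitor environment", "on-line")
adapt = Activity("execute adaptation", "on-line", depends_on=[design, monitor])

print(online_with_offline_inputs([design, monitor, adapt]))
# prints ['execute adaptation']
```

In a real SPEM model these would be stereotyped process elements rather than plain objects, but the same query, which run-time activities consume development-time artifacts, is where the cost/benefit trade-off of shifting an activity to run-time becomes visible.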
Ascent of Asymmetric Risk in Information Security: An Initial Evaluation.
Dramatic changes in the information security risk landscape over several decades have not yet been matched by similar changes in organizational information security, which is still mainly based on the mindset that security is achieved through extensive preventive controls. As a result, the maintenance cost of information security is increasing rapidly, but this increased expenditure has not really made an attack more difficult. The opposite seems to be true: information security attacks have become easier to perpetrate and appear more like information warfare tactics. At the same time, the damage caused by a successful attack has increased significantly and may sometimes become critical to an organization. In this paper an extremely asymmetric risk is evaluated, in which a strongly motivated attacker unleashes a prolonged attack on an organization with the aim of doing maximum damage. It is suggested that the probability of such an attack is increasing. The reason why preventive controls are unlikely to ever be effective against such an attack is discussed, and proposals are made towards more advanced strategies that aim to limit the damage when such an attack occurs. One crucial lesson for organizations that depend on their information security, such as critical infrastructure organizations, is the need to deny motivated attackers access to any information about the success of their attack. Successful deception in this area is likely to significantly reduce any potential escalation of the incident.
Leaky Rigid Lid: New Dissipative Modes in the Troposphere
An effective boundary condition is derived for the top of the troposphere, based on a wave radiation condition at the tropopause. This boundary condition, which can be formulated as a pseudodifferential equation, leads to new vertical dissipative modes. These modes can be computed explicitly in the classical setup of a hydrostatic, nonrotating atmosphere with a piecewise constant Brunt–Väisälä frequency.
In the limit of an infinitely strongly stratified stratosphere, these modes lose their dissipative nature and become the regular baroclinic tropospheric modes under the rigid-lid approximation. For realistic values of the stratification, the decay time scales of the first few modes for mesoscale disturbances range from an hour to a week, suggesting that the time scale for some atmospheric phenomena may be set by the rate of energy loss through upward-propagating waves.
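The form such a boundary condition takes can be sketched from standard theory (this is a reconstruction under the stated assumptions of a hydrostatic, nonrotating atmosphere, not the paper's own equation; the symbols $N_s$, $H_T$, $\rho_0$ are introduced here). For hydrostatic gravity waves radiating upward into a stratosphere with constant Brunt–Väisälä frequency $N_s$, pressure and vertical velocity at the tropopause $z = H_T$ are related, for each horizontal wavenumber $k$, by

```latex
% Radiation condition at the tropopause (sketch, not the paper's equation):
% selecting upward energy flux for each horizontal wavenumber k gives
\hat{p}(k, H_T, t) \;=\; \frac{\rho_0 N_s}{|k|}\,\hat{w}(k, H_T, t),
% i.e. in physical space  p = \rho_0 N_s \, |\partial_x|^{-1} w ,
% a pseudodifferential relation, since |k| is nonlocal in x.
% As N_s \to \infty, keeping p finite forces w \to 0 at z = H_T,
% recovering the rigid-lid boundary condition.
```

The $N_s \to \infty$ limit in this sketch is consistent with the abstract's statement that infinitely strong stratospheric stratification recovers the regular rigid-lid baroclinic modes.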
Subwavelength InSb-based slot waveguides for THz transport: concept and practical implementations
Seeking better surface plasmon polariton (SPP) waveguides is of critical importance for constructing frequency-agile terahertz (THz) front-end circuits. We propose and investigate here a new class of semiconductor-based slot plasmonic waveguides for subwavelength THz transport. Optimization of the key geometrical parameters demonstrates its better guiding properties for the simultaneous realization of long propagation lengths (up to several millimeters) and ultra-tight mode confinement (∼λ²/530) in the THz spectral range. The feasibility of the waveguide for compact THz components is also studied to lay the foundations for its practical implementation. Importantly, the waveguide is compatible with the current complementary metal-oxide-semiconductor (CMOS) fabrication technique. We believe the proposed waveguide configuration offers potential for developing a CMOS plasmonic platform and can be designed into various components for future integrated THz circuits (ITCs).
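Why InSb can guide SPPs at THz frequencies at all comes down to its Drude-like permittivity: below the plasma frequency the real part is negative, mimicking a metal at optical frequencies. The sketch below uses typical literature values for InSb (carrier density, effective mass, collision rate), not the parameters of this paper.

```python
# Illustrative sketch (typical literature values, NOT the paper's parameters):
# Drude permittivity of InSb at THz frequencies. A negative real part below
# the plasma frequency is what lets semiconductors like InSb support surface
# plasmon polaritons in the THz range, analogous to noble metals at optics.

import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
E = 1.602e-19      # elementary charge, C
ME = 9.109e-31     # free-electron mass, kg

def insb_permittivity(freq_hz, n=1.0e22, gamma=1.0e12,
                      eps_inf=15.7, m_eff=0.014):
    """Drude model: eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w).
    n: carrier density (m^-3), gamma: collision rate (rad/s),
    m_eff: effective mass in units of the free-electron mass.
    Default values are rough room-temperature figures for InSb."""
    w = 2 * math.pi * freq_hz
    wp2 = n * E**2 / (EPS0 * m_eff * ME)   # plasma frequency squared
    return eps_inf - wp2 / (w**2 + 1j * gamma * w)

eps = insb_permittivity(1.0e12)   # evaluate at 1 THz
print(eps.real < 0)               # True: metallic, SPP-supporting behaviour
```

The very small effective mass of InSb (here ~0.014 mₑ) pushes the plasma frequency into the THz range even at modest carrier densities, which is why InSb in particular is attractive for THz plasmonics.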
From Self-Adaptation to Self-Evolution Leveraging the Operational Design Domain
Engineering long-running computing systems that achieve their goals under ever-changing conditions poses significant challenges. Self-adaptation has proven to be a viable approach to dealing with changing conditions. Yet, the capabilities of a self-adaptive system are constrained by its operational design domain (ODD), i.e., the conditions for which the system was built (requirements, constraints, and context). Changes, such as adding new goals or dealing with new contexts, require system evolution. While the system evolution process has been automated substantially, it remains human-driven. Given the growing complexity of computing systems, human-driven evolution will eventually become unmanageable. In this paper, we provide a definition for ODD and apply it to a self-adaptive system. Next, we explain why conditions not covered by the ODD require system evolution. Then, we outline a new approach for self-evolution that leverages the concept of ODD, enabling a system to evolve autonomously to deal with conditions not anticipated by its initial ODD. We conclude with open challenges to realise self-evolution.