
    Simulating Membrane Systems in Digital Computers

    Membrane Computing started with the analogy between some processes produced inside the complex structure of living cells and computational processes. As in other branches of Natural Computing, the model is extracted from nature, but it is not clear whether the model must return to nature to be implemented. As in other cases in Natural Computing (Artificial Neural Networks, Genetic Algorithms, etc.), the models have been implemented in digital computers. Hence, several papers have been published considering the implementation of Membrane Computing in digital computers. This paper presents an overview of the field of simulation in Membrane Computing.
    (Work partially supported by a contribution of the EU Commission under the Fifth Framework Programme, project “MolCoNet” IST-2001-32008.)

    Causality, Information and Biological Computation: An algorithmic software approach to life, disease and the immune system

    Biology has taken strong steps towards becoming a computer science, aiming at reprogramming nature after the realisation that nature herself has reprogrammed organisms by harnessing the power of natural selection and the digital, prescriptive nature of replicating DNA. Here we further unpack ideas related to computability, algorithmic information theory and software engineering, in the context of the extent to which biology can be (re)programmed, and of how we may go about doing so more systematically with all the tools and concepts offered by theoretical computer science, in a translation exercise from computing to molecular biology and back. These concepts provide a means to a hierarchical organisation, thereby blurring previously clear-cut lines between concepts like matter and life, or between tumour types that are otherwise taken as different yet may not, however, have a different cause. This does not diminish the properties of life or make its components and functions less interesting. On the contrary, this approach makes for a more encompassing and integrated view of nature, one that subsumes observer and observed within the same system, and can generate new perspectives and tools with which to view complex diseases like cancer, approaching them afresh from a software-engineering viewpoint that casts evolution in the role of programmer, cells as computing machines, DNA and genes as instructions and computer programs, viruses as hacking devices, the immune system as a software debugging tool, and diseases as an information-theoretic battlefield where all these forces deploy. We show how information theory and algorithmic programming may explain fundamental mechanisms of life and death.
    Comment: 30 pages, 8 figures. Invited chapter contribution to Information and Causality: From Matter to Life. Sara I. Walker, Paul C.W. Davies and George Ellis (eds.), Cambridge University Press.

    SMiT: Local System Administration Across Disparate Environments Utilizing the Cloud

    System administration can be tedious. Most IT departments maintain several (if not several hundred) computers, each of which requires periodic housecleaning: updating software, clearing log files, removing old cache files, etc. Compounding the problem is the computing environment itself. Because of the distributed nature of these computers, system administration time is often consumed by repetitive tasks that should be automated. Although many system administration tools exist, they are often centralized, unscalable, unintuitive, or inflexible. To meet the needs of system administrators and IT professionals, we developed the Script Management Tool (SMiT). SMiT is a web-based tool that permits administration of distributed computers from virtually anywhere via a common web browser. SMiT consists of a cloud-based server running on Google App Engine, enabling users to intuitively create, manage, and deploy administration scripts. To support local execution of scripts, SMiT provides an execution engine that runs on the organization’s local machines and communicates with the server to fetch scripts, execute them, and deliver results back to the server. Because of its distributed, asynchronous architecture, SMiT is scalable to thousands of machines. SMiT is also extensible to a wide variety of system administration tasks via its plugin architecture.
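    The fetch/execute/report loop the abstract describes can be sketched in a few lines. This is a hedged, in-memory illustration, not SMiT's actual API: the class and method names (`ScriptServer`, `ExecutionEngine`, `run_once`) are invented for the example, and the cloud server is stood in by a local object.

    ```python
    # Minimal sketch of a SMiT-style execution engine (names are illustrative,
    # not the real SMiT API): the local agent fetches pending scripts from the
    # server, runs each one, and reports the captured output back.
    import subprocess

    class ScriptServer:
        """Stand-in for the cloud-based server component."""
        def __init__(self):
            self.pending = []   # (script_id, shell command) queued for this machine
            self.results = {}   # script_id -> captured output

        def fetch_scripts(self):
            scripts, self.pending = self.pending, []
            return scripts

        def report(self, script_id, output):
            self.results[script_id] = output

    class ExecutionEngine:
        """Local agent implementing the fetch -> execute -> report loop."""
        def __init__(self, server):
            self.server = server

        def run_once(self):
            for script_id, command in self.server.fetch_scripts():
                proc = subprocess.run(command, shell=True,
                                      capture_output=True, text=True)
                self.server.report(script_id, proc.stdout.strip())

    server = ScriptServer()
    server.pending.append(("clear-tmp", "echo cleaned 3 cache files"))
    engine = ExecutionEngine(server)
    engine.run_once()
    print(server.results["clear-tmp"])   # -> cleaned 3 cache files
    ```

    In the real system the agent would poll asynchronously over HTTP, which is what lets the architecture scale to thousands of machines without the server tracking each one.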

    Developing a Contextual Model towards Understanding Low Back Pain

    Recent advances in mobile computing and sensor technology have provided new opportunities in data collection and analysis, especially in medical fields of research. Low back pain is a key area within chronic pain management. It is a widespread problem and a major contributor to disability worldwide. Researchers have concluded that pain can be an individualistic experience. Evidence from other fields of research shows that studying the context of a phenomenon can allow for a better understanding of its nature. Existing studies may not consider the full context of the patients’ pain, and collect data infrequently (e.g. monthly or yearly). An explanation for this could be the cost and difficulty of collecting such data in the past. In this research, we propose a descriptive contextual model that extends a current low back pain model with contextual attributes and factors. The goal of this research is to provide researchers with a descriptive contextual classification of variables into their respective factors, and to guide future studies in collecting such data by utilizing advances in mobile and sensor technology.

    Time-Based Addiction

    This paper introduces time-based addiction, which refers to excessive engagement in an activity that results in negative outcomes due to the misallocation of time. This type of addiction is often seen in media-related activities such as video games, social media, and television watching. Behavioural design in video games plays a significant role in enabling time-based addiction. Games are designed to be engaging and enjoyable, with features such as rewards, levelling up, and social competition, all of which are intended to keep players coming back for more. This article reviews the behavioural design used in video games, and media more broadly, to increase the addictive nature of these experiences. By doing so, the article aims to recognise time-based addiction as a problem that in large part stems from irresponsible design practices.
    Comment: Accepted at the CHI-23 1st Workshop on Behavioural Design in Video Games: Ethical, Legal, and Health Impact on Players, held at the CHI Conference on Human Factors in Computing Systems (CHI-23), 8 pages.

    ONIX: Open Radio Network Information eXchange

    While video-on-demand still takes up the lion’s share of Internet traffic, we are witnessing a significant increase in the adoption of mobile applications with tight bit rate and latency requirements (e.g., augmented/virtual reality). Supporting such applications over a mobile network is very challenging due to the unsteady nature of the network and the long distance between the users and the application back-end, which usually sits in the cloud. To address these and other challenges, like security, reliability, and scalability, a new paradigm termed multi-access edge computing (MEC) has emerged. MEC places computational resources closer to the end users, thus reducing the overall end-to-end latency and the utilization of the network backhaul. However, to adapt to the volatile nature of a mobile network, MEC applications need real-time information about the status of the radio channel. The ETSI-defined radio network information service (RNIS) is in charge of providing MEC applications with up-to-date information about the radio network. In this article, we first discuss three use cases that can benefit from the RNIS (collision avoidance, media streaming, and Industrial Internet of Things). Then we analyze the requirements and challenges underpinning the design of a scalable RNIS platform, and report on a prototype implementation and its evaluation. Finally, we provide a roadmap of future research challenges.
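    The flow the abstract describes, where the radio stack publishes channel measurements and MEC applications react to them, can be illustrated with a toy publish/subscribe sketch. This is a simplified in-memory stand-in, not the ETSI RNIS API: the class name, event type, and CQI thresholds below are all invented for the example.

    ```python
    # Hedged sketch of an RNIS-style flow: the radio network publishes channel
    # measurements, and a subscribed MEC app (here, a media-streaming service)
    # adapts its bit rate in real time. Thresholds are illustrative only.
    from collections import defaultdict

    class RadioNetworkInformationService:
        def __init__(self):
            self.subscribers = defaultdict(list)   # event type -> callbacks

        def subscribe(self, event_type, callback):
            self.subscribers[event_type].append(callback)

        def publish(self, event_type, measurement):
            for callback in self.subscribers[event_type]:
                callback(measurement)

    chosen_bitrate = []

    def adapt_bitrate(measurement):
        # Pick a high profile only when channel quality (CQI) is good.
        chosen_bitrate.append("4K" if measurement["cqi"] >= 10 else "720p")

    rnis = RadioNetworkInformationService()
    rnis.subscribe("rrc_measurement", adapt_bitrate)
    rnis.publish("rrc_measurement", {"ue": "user-1", "cqi": 12})
    rnis.publish("rrc_measurement", {"ue": "user-1", "cqi": 4})
    print(chosen_bitrate)   # -> ['4K', '720p']
    ```

    The scalability question the article raises is essentially about doing this fan-out for thousands of users and many applications without the RNIS becoming a bottleneck.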

    LEARNING ABOUT AMBIGUOUS TECHNOLOGIES: CONCEPTUALIZATION AND RESEARCH AGENDA

    Information Technologies (IT) have gradually transformed into complex digital artefacts with blurred and constantly changing functional boundaries. While this shift offers promising avenues that unfold in front of our eyes every day, it also challenges the deeply entrenched knowledge structures on which ordinary users rely to learn about unfamiliar technologies. We propose to take a step back in order to theorize the ambiguous nature of modern IT and to speculate on how users learn to use it. This paper revisits a wide array of management trends (BYOD, gamification) and IS design trends (generativity, everyday computing, incompleteness) through the lens of the categorization framework. Our review of the literature on ambiguous products suggests that users exposed to ambiguous technologies may experience a categorization difficulty that disrupts the process of learning how to use them. This difficulty stems from a user’s belief that there are multiple or inconsistent interpretations of why and how to use an IT, as well as a perception that a given IT has some attributes in common with one or several seemingly unrelated ITs. We build on this theorization to propose a research agenda and discuss the expected practical implications of this path of research.