A novel approach to assess minimally invasive surgical device failure utilizing adverse event outcome severity and design complexity.
Medical device failure and misuse have the potential to cause serious injury and death. Given the intricate nature of the instruments used in minimally invasive surgery (MIS), users and manufacturers of surgical devices share a responsibility for preventing user error and device failure. A novel approach was presented for the evaluation of minimally invasive device failures, which involved assessing the severity of adverse event outcomes associated with the failure modes and investigating aspects of the devices' design that may contribute to failure. The goals of this research were to 1) characterize the design attributes, failure modes, and adverse events associated with minimally invasive surgical devices and 2) describe the relationship between minimally invasive surgical device design complexity and the severity of adverse events. The types of failure modes, phases of operation in which failure occurs, severity of adverse event outcomes, and design complexity associated with four minimally invasive surgical devices were determined. An association was shown to exist between phases of surgical device operation and the severity of outcomes that occur in each phase (p < 0.05). Across both device types, the majority of failures occurred during execution of the devices' main function, which involved securing and transecting tissue. The fewest failures occurred during the results and post-op phase of operation; however, the failures that occurred during this phase resulted in the highest average outcome severity. The endoscopic staplers assessed resulted in overall higher average outcome severities relative to those of the tissue sealers. The methods employed are the first to evaluate medical device design, function, and failure outcomes from a complexity perspective. While statistical conclusions regarding the overall research goal could not be drawn, heuristic methods support development of the approach presented.
The work herein supports the enhancement of risk awareness and prevention techniques and contributes to filling the knowledge gap regarding device use and failure outcomes. Bridging the gap between surgeons and engineers is crucial to the successful implementation and evaluation of new technology in the operating room, and was an essential component of this research.
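The phase-severity association reported above (p < 0.05) is the kind of result a chi-square test of independence produces. The sketch below illustrates the method only; the contingency counts are invented for illustration and are not data from this study.

```python
# Chi-square test of independence between phase of device operation and
# adverse-event outcome severity. All counts below are hypothetical,
# invented to illustrate the method; they are NOT data from the study.

# Rows: phases of operation; columns: outcome severity (low, moderate, high).
observed = [
    [40, 25, 10],   # insertion / setup
    [120, 60, 30],  # main function (securing and transecting tissue)
    [5, 10, 20],    # results / post-op
]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
total = sum(row_totals)

# chi2 = sum over cells of (observed - expected)^2 / expected
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / total) ** 2
    / (row_totals[i] * col_totals[j] / total)
    for i in range(3)
    for j in range(3)
)
dof = (3 - 1) * (3 - 1)
CRITICAL_0_05_DOF4 = 9.488  # chi-square critical value, alpha = 0.05, dof = 4

print(f"chi2 = {chi2:.2f}, dof = {dof}")
print(f"reject independence at 0.05: {chi2 > CRITICAL_0_05_DOF4}")
```

With these hypothetical counts the statistic far exceeds the critical value, mirroring the paper's finding that outcome severity is not independent of the phase of operation.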
A structural empirical model of firm growth, learning, and survival
In this paper we develop an empirical model of entrepreneurs' business continuation decisions, and we estimate its parameters using a new panel of monthly alcohol tax returns from bars in the state of Texas. In our data, entrepreneurial failure is frequent and predictable. In the first year of life, 20% of our sample's bars exit, and these tend to be smaller than average. In the model, an entrepreneur bases her business continuation decision on potentially noisy signals of her bar's future profits. The presence of noise implies that she should make her decision based on both current and past realizations of the signal. We observe for each bar its sales, which we assume equal a noisy version of the entrepreneur's signal. That is, the entrepreneur's information about her bar is private.

The entrepreneur's private information makes the estimation of our model challenging, because we cannot observe the inputs into her decision process. Nevertheless, we are able to recover from our observations the parameters characterizing the entrepreneur's learning process and the noise contaminating publicly available sales observations. The key to our analysis is to note that our ability to forecast the entrepreneur's decisions reveals the amount of noise contaminating publicly available sales observations. We infer that public and private information differ little if we can forecast entrepreneurs' business continuation decisions well. With this information, we can then determine whether the usefulness of past sales observations for forecasting future sales arises only from the noise contaminating public observations or if the observations imply the presence of additional noise contaminating entrepreneurs' observations.

We estimate our model using observations from the first twelve months of life for approximately 300 Texas bars. We find that entrepreneurs observe the persistent component of profit without error.
In this sense, their information is substantially superior to the public's.
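The learning mechanism described above can be sketched as conjugate Bayesian updating: the entrepreneur sees monthly signals of a persistent profitability component and continues only while her posterior belief stays above a threshold. All parameters below (prior, noise variance, exit threshold) are hypothetical and are not the paper's estimates.

```python
import random

# Sketch: the entrepreneur observes s_t = mu + eps_t each month, where mu
# is the bar's persistent profitability (unknown to her) and eps_t is
# Gaussian noise. She maintains a normal posterior over mu and exits when
# its mean falls below a threshold. Numbers are hypothetical.
random.seed(0)

PRIOR_MEAN, PRIOR_VAR = 0.0, 4.0  # beliefs about mu before opening
NOISE_VAR = 1.0                   # variance of the signal noise eps_t
EXIT_THRESHOLD = -0.5             # continue only while posterior mean exceeds this

true_mu = -1.0                    # an unprofitable bar, unknown to the owner
mean, var = PRIOR_MEAN, PRIOR_VAR
months_open = 0

for month in range(12):           # first twelve months of life
    signal = true_mu + random.gauss(0.0, NOISE_VAR ** 0.5)
    # Conjugate normal update: precision-weighted average of prior and signal.
    precision = 1.0 / var + 1.0 / NOISE_VAR
    mean = (mean / var + signal / NOISE_VAR) / precision
    var = 1.0 / precision
    months_open += 1
    if mean < EXIT_THRESHOLD:     # business continuation decision
        break

print(f"exited after {months_open} months, posterior mean {mean:.2f}")
```

Because past signals enter through the posterior, the continuation decision depends on the full signal history, which is exactly why noisy signals make current sales alone a poor predictor of exit.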
Rule-based Methodologies for the Specification and Analysis of Complex Computing Systems
From the origins of hardware and software to the present day, the complexity of computing systems has posed a problem that computer scientists, engineers, and programmers have had to confront. Important research areas have emerged and matured as a result of this effort. In this dissertation we address some of the current lines of research related to the analysis and verification of complex computing systems using formal methods and domain-specific languages.

This thesis focuses on distributed systems, with particular interest in Web systems and biological systems. The first part of the thesis is devoted to security aspects and related techniques, specifically software certification. We first study resource access-control systems and propose a language for specifying access-control policies that are tightly coupled to knowledge bases and that provide a semantics-aware description of the resources or elements being accessed. We have also developed a novel framework for Code-Carrying Theory, a software-certification methodology whose goal is to ensure the safe delivery of code in a distributed environment. Our framework is based on a system for transforming rewriting-logic theories by means of fold/unfold operations.

The second part of this thesis concentrates on the analysis and verification of Web systems and biological systems. We propose an information-filtering language that supports the retrieval of information from large data repositories. This language uses semantic information obtained from remote ontologies to refine the filtering process. We also study validation methods for checking the consistency of Web content with respect to syntactic and semantic properties. Another contribution is a language for defining and automatically checking semantic and syntactic constraints on the static content of a Web system. Finally, we also consider biological systems and focus on a rewriting-logic-based formalism for modeling and analyzing quantitative aspects of biological processes.

To evaluate the effectiveness of all the proposed methodologies, we have paid special attention to the development of prototypes implemented using rule-based languages.

Baggi, M. (2010). Rule-based Methodologies for the Specification and Analysis of Complex Computing Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8964
Enabling Resilience in Cyber-Physical-Human Water Infrastructures
Rapid urbanization and growth in urban populations have forced community-scale infrastructures (e.g., water, power, and natural gas distribution systems, and transportation networks) to operate at their limits. Aging (and failing) infrastructures around the world are becoming increasingly vulnerable to operational degradation, extreme weather, natural disasters, and cyber attacks/failures. These trends have wide-ranging socioeconomic consequences and raise public safety concerns. In this thesis, we introduce the notion of cyber-physical-human infrastructures (CPHIs): smart community-scale infrastructures that bridge technologies with physical infrastructures and people. CPHIs are highly dynamic stochastic systems characterized by complex physical models that exhibit regionwide variability and uncertainty under disruptions. Failures in these distributed settings tend to be difficult to predict and estimate, and expensive to repair. Real-time fault identification is crucial to ensure continuity of lifeline services to customers at adequate levels of quality. Emerging smart community technologies have the potential to transform our failing infrastructures into robust and resilient future CPHIs.

In this thesis, we explore one such CPHI: community water infrastructures. Current urban water infrastructures, which are decades (sometimes over 100 years) old, encompass diverse geophysical regimes. Water stress concerns include the scarcity of supply and an increase in demand due to urbanization. Deterioration and damage to the infrastructure can disrupt water service; contamination events can result in economic and public health consequences. Unfortunately, little investment has gone into modernizing this key lifeline.

To enhance the resilience of water systems, we propose an integrated middleware framework for quick and accurate identification of failures in complex water networks that exhibit uncertain behavior.
Our proposed approach integrates IoT-based sensing, domain-specific models, and simulations with machine learning methods to identify failures (pipe breaks, contamination events). The composition of techniques results in cost-accuracy-latency tradeoffs in fault identification, inherent in CPHIs due to the constraints imposed by cyber components, physical mechanics, and human operators. Three key resilience problems are addressed in this thesis: isolation of multiple faults under a small number of failures, state estimation of water systems under extreme events such as earthquakes, and contaminant source identification in water networks using human-in-the-loop sensing. By working with real-world water agencies (WSSC, DC and LADWP, LA), we first develop an understanding of the operations of water CPHI systems. We design and implement a sensor-simulation-data integration framework, AquaSCALE, and apply it to localize multiple concurrent pipe failures. We use a mixture of infrastructure measurements (i.e., historical and live water pressure/flow), environmental data (i.e., weather), and human inputs (i.e., Twitter feeds), combined and enhanced with the domain model and supervised learning techniques, to locate multiple failures at fine levels of granularity (individual pipeline level) with detection time reduced by orders of magnitude (from hours/days to minutes). We next consider the resilience of water infrastructures under extreme events (i.e., earthquakes); the challenge here is the lack of a priori knowledge and the increased number and severity of damages to infrastructures. We present a graphical-model-based approach for efficient online state estimation, where offline graph factorization partitions a given network into disjoint subgraphs, and belief-propagation-based inference is executed on the fly in a distributed manner on those subgraphs.
Our proposed approach can isolate 80% of broken pipes and 99% of loss-of-service to end-users during an earthquake.

Finally, we address issues of water quality: today this is a human-in-the-loop process in which operators need to gather water samples for lab tests. We incorporate the necessary abstractions with event processing methods into a workflow, which iteratively selects and refines the set of potential failure points via human-driven grab sampling. Our approach utilizes Hidden Markov Model based representations for event inference, along with reinforcement learning methods for further refining event locations and reducing the cost of human effort.

The proposed techniques are integrated into a middleware architecture, which enables components to communicate and collaborate with one another. We validate our approaches through a prototype implementation with multiple real-world water networks, supply-demand patterns from water utilities, and policies set by the U.S. EPA. While our focus here is on water infrastructures in a community, the developed end-to-end solution is applicable to other infrastructures and community services that operate in disruptive and resource-constrained environments.
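The HMM-based event inference mentioned above can be sketched with Viterbi decoding: hidden states stand for candidate contamination locations and observations for coarse grab-sample results. The network nodes, labels, and probabilities below are invented for illustration and are not from the thesis.

```python
# Viterbi decoding on a tiny hypothetical HMM: hidden states are candidate
# contamination locations, observations are grab-sample outcomes.
STATES = ["node_A", "node_B", "node_C"]

start = {"node_A": 0.5, "node_B": 0.3, "node_C": 0.2}
trans = {  # contamination tends to persist at the same node
    "node_A": {"node_A": 0.8, "node_B": 0.1, "node_C": 0.1},
    "node_B": {"node_A": 0.1, "node_B": 0.8, "node_C": 0.1},
    "node_C": {"node_A": 0.1, "node_B": 0.1, "node_C": 0.8},
}
emit = {  # P(sample outcome | contamination is at this node)
    "node_A": {"clean": 0.7, "trace": 0.2, "contaminated": 0.1},
    "node_B": {"clean": 0.2, "trace": 0.5, "contaminated": 0.3},
    "node_C": {"clean": 0.1, "trace": 0.2, "contaminated": 0.7},
}

def viterbi(observations):
    """Return the most likely hidden-state sequence for the observations."""
    prob = {s: start[s] * emit[s][observations[0]] for s in STATES}
    path = {s: [s] for s in STATES}
    for o in observations[1:]:
        new_prob, new_path = {}, {}
        for s in STATES:
            best_prev = max(STATES, key=lambda p: prob[p] * trans[p][s])
            new_prob[s] = prob[best_prev] * trans[best_prev][s] * emit[s][o]
            new_path[s] = path[best_prev] + [s]
        prob, path = new_prob, new_path
    return path[max(STATES, key=lambda s: prob[s])]

print(viterbi(["contaminated", "contaminated", "trace"]))
# → ['node_C', 'node_C', 'node_C']
```

In a grab-sampling loop, the decoded states would narrow the set of candidate failure points for the next round of human-driven sampling.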
Third Workshop on Modelling of Objects, Components, and Agents
This booklet contains the proceedings of the Third International Workshop on Modelling of Objects, Components, and Agents (MOCA'04), October 11-13, 2004. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark and the "Theoretical Foundations of Computer Science" group at the University of Hamburg. The home page of the workshop is: http://www.daimi.au.dk/CPnets/workshop0
Towards a Framework for Proving Termination of Maude Programs
Maude is a declarative programming language based on rewriting logic that incorporates many features which make it very powerful. However, these features create difficulties when it comes to proving certain computational properties. Proving the termination of rewrite systems is in fact quite a hard task, and applied to real programming languages it becomes even harder because of these inherent features. As a consequence, methods for proving the termination of this kind of program require specific techniques and careful analysis. Several works have attempted to prove termination of (a subset of) Maude programs. However, all of them follow a transformational approach, where the original program is transformed until it becomes a rewrite system that can be handled by existing termination techniques and tools. In practice, transforming the original system usually complicates the termination proof, since the transformation introduces new symbols and rules into the system. In this thesis, we tackle the problem of proving termination of (a subset of) Maude programs by means of direct methods.

On the one hand, we focus on Maude's evaluation strategy. Maude is an eager language, where the arguments of a function are always evaluated before the application of the function that uses them. This strategy (known as call by value) can lead to nontermination if programs are not written carefully. For this reason, Maude (in particular) incorporates mechanisms to control the execution of programs, such as syntactic annotations associated with the arguments of symbols. In rewriting, this strategy is known as innermost context-sensitive rewriting.

On the other hand, Maude also provides the possibility of declaring attributes.

Alarcón Jiménez, B. (2011). Towards a Framework for Proving Termination of Maude Programs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11003
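The nontermination hazard of eager evaluation can be illustrated outside Maude as well. In the sketch below (hypothetical Python, not Maude code), a strict conditional evaluates both branches before choosing one, so a naive recursive definition diverges; delaying evaluation with thunks plays the role that Maude's strategy annotations play for strict function symbols.

```python
# Under call-by-value, arguments are evaluated before the function is
# applied, so a naive conditional evaluates BOTH branches.

def strict_if(cond, then_val, else_val):
    # Both branch values are already evaluated by the time we get here.
    return then_val if cond else else_val

def fact_eager(n):
    # The recursive call is made even when n == 0 -> infinite recursion.
    return strict_if(n == 0, 1, n * fact_eager(n - 1))

def lazy_if(cond, then_thunk, else_thunk):
    # Delaying evaluation of the branches (cf. Maude's strategy
    # annotations on argument positions) restores termination.
    return then_thunk() if cond else else_thunk()

def fact_lazy(n):
    return lazy_if(n == 0, lambda: 1, lambda: n * fact_lazy(n - 1))

try:
    fact_eager(3)
except RecursionError:
    print("fact_eager does not terminate (recursion limit hit)")

print("fact_lazy(5) =", fact_lazy(5))  # → 120
```

This is exactly why termination analysis for Maude must take the evaluation strategy into account: the same equations terminate or diverge depending on which argument positions are evaluated.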
Modeling, verification, and analysis of timed actor-based models
In recent years, formal modeling and verification of real-time systems have become very important. Difficult-to-use modeling languages and inefficient analysis tools are the main obstacles to the use of formal methods in this domain. The timed actor model is one of the modeling paradigms proposed for modeling real-time systems. It benefits from high-level object-oriented modeling facilities; however, the analysis techniques developed for timed actors need to be improved to make the actor model acceptable for the analysis of real-world applications.
In this thesis, we first tackle the model checking problem of timed actors by proposing the standard semantics of timed actors in terms of a fine-grained timed transition system (FGTS) and transforming it into a Durational Transition Graph (DTG). This way, while the time complexity of model checking algorithms for TCTL properties is in general non-polynomial, we are able to check a subset of TCTL properties using model checking in polynomial time. We also improve the model checking algorithm for TCTL properties, obtaining a time complexity of O((V lg V + E) |Φ|) instead of O(V(V+E)|Φ|), and use it for efficient model checking of timed actors. In addition, we propose a reduction technique which safely eliminates instantaneous transitions of FGTS. Using the proposed reduction technique, we provide an efficient algorithm for model checking of complete TCTL properties over the reduced transition systems.
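A flavor of model checking over a durational transition graph can be given with a time-bounded reachability check ("can a state satisfying φ be reached within t time units?"), which reduces to a shortest-path computation over transition durations. The graph, durations, and labeling below are hypothetical and do not come from the thesis.

```python
import heapq

# Time-bounded reachability on a tiny hypothetical durational transition
# graph: Dijkstra computes the minimum time needed to reach a state
# satisfying phi; comparing against the bound decides the property.
graph = {  # state -> [(successor, transition duration)]
    "s0": [("s1", 2), ("s2", 5)],
    "s1": [("s3", 4)],
    "s2": [("s3", 1)],
    "s3": [],
}
satisfies_phi = {"s0": False, "s1": False, "s2": False, "s3": True}

def min_time_to_phi(start):
    """Minimum total duration from start to any state satisfying phi."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        t, state = heapq.heappop(queue)
        if satisfies_phi[state]:
            return t  # popped in nondecreasing order, so this is minimal
        if t > dist.get(state, float("inf")):
            continue  # stale queue entry
        for succ, d in graph[state]:
            if t + d < dist.get(succ, float("inf")):
                dist[succ] = t + d
                heapq.heappush(queue, (t + d, succ))
    return float("inf")

def check_ef_within(start, bound):
    """Does some path reach a phi-state using at most `bound` time?"""
    return min_time_to_phi(start) <= bound

print(check_ef_within("s0", 6))  # both paths to s3 take 6 units → True
print(check_ef_within("s0", 5))  # no path reaches phi within 5 → False
```

This captures only the simplest bounded-reachability fragment; the thesis's algorithms handle full TCTL formulas, but the duration-weighted graph traversal is the common core.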
In actor-based models, the absence of shared variables and the presence of single-threaded actors, along with the non-preemptive execution of each message server, ensure that the executions of message servers do not interfere with each other. Based on this observation, we propose the Floating Time Transition System (FTTS) as the big-step semantics of timed actors. The big-step semantics exploits actor features to relax the synchronization of the progress of time among actors, thereby reducing the number of states in transition systems. Considering an actor-based language, we prove that there is an action-based weak bisimulation relation between FTTS and FGTS. As a result, the big-step semantics preserves event-based branching-time properties.
Finally, we show how Timed Rebeca and FTTS are used as the back-end analysis technique of three different independent works, illustrating the applicability of FTTS in practice.

The work on this dissertation was supported by the project "Self-Adaptive Actors: SEADA" (nr. 163205-051) of the Icelandic Research Fund.
Software engineering perspectives on physiological computing
Physiological computing is an interesting and promising concept for widening the communication channel between (human) users and computers, thus allowing an increase in software systems' contextual awareness and rendering software systems smarter than they are today. Using physiological inputs in pervasive computing systems allows re-balancing the information asymmetry between the human user and the computer system: while pervasive computing systems are well able to flood the user with information and sensory input (such as sounds, lights, and visual animations), users only have a very narrow input channel to computing systems, most of the time restricted to keyboards, mice, touchscreens, accelerometers, and GPS receivers (through smartphone usage, e.g.). Interestingly, this information asymmetry often forces the user to submit to the quirks of the computing system to achieve his goals: for example, users may have to provide information the software system demands through a narrow, time-consuming input mode, even though the system could sense it implicitly from the human body. Physiological computing is a way to circumvent these limitations; however, systematic means for developing and moulding physiological computing applications into software are still unknown.
This thesis proposes a methodological approach to the creation of physiological computing applications that makes use of component-based software engineering. Components help impose a clear structure on software systems in general, and can thus be used for physiological computing systems as well. As an additional bonus, using components allows physiological computing systems to leverage reconfigurations as a means to control and adapt their own behaviour. This adaptation can be used to adjust the behaviour both to the human and to the available computing environment in terms of resources and available devices, an activity that is crucial for complex physiological computing systems. With the help of components and reconfigurations, it is possible to structure the functionality of physiological computing applications in a way that makes them manageable and extensible, thus allowing a stepwise and systematic extension of a system's intelligence.
Using reconfigurations entails a larger issue, however. Understanding and fully capturing the behaviour of a system under reconfiguration is challenging, as the system may change its structure in ways that are difficult to fully predict. Therefore, this thesis also introduces a means for the formal verification of reconfigurations based on assume-guarantee contracts. With the proposed assume-guarantee contract framework, it is possible to prove that a given system design (including component behaviours and reconfiguration specifications) satisfies real-time properties expressed as assume-guarantee contracts, using a variant of real-time linear temporal logic introduced in this thesis: metric interval temporal logic for reconfigurable systems.
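One classic guarantee such contracts express is bounded response: every request must be answered within a fixed time window. The sketch below monitors this property on a timed trace; the trace format, event names, and deadline are hypothetical and are not the thesis's contract language.

```python
# Monitoring a bounded-response guarantee ("every 'request' is followed by
# a 'response' within DEADLINE time units") on a timed trace, in the
# spirit of metric-interval real-time properties. Hypothetical names.

DEADLINE = 3.0

def satisfies_bounded_response(trace):
    """trace: list of (timestamp, event) pairs, sorted by timestamp."""
    pending = []  # timestamps of requests still awaiting a response
    for t, event in trace:
        # A pending request whose deadline has already passed is a violation.
        if any(t - t_req > DEADLINE for t_req in pending):
            return False
        if event == "request":
            pending.append(t)
        elif event == "response":
            pending.clear()  # simplification: one response discharges all
    # Requests left unanswered at trace end are conservatively a violation.
    return not pending

ok_trace = [(0.0, "request"), (2.0, "response"), (5.0, "request"), (7.5, "response")]
bad_trace = [(0.0, "request"), (4.0, "response")]  # answered after 4.0 > 3.0

print(satisfies_bounded_response(ok_trace))   # → True
print(satisfies_bounded_response(bad_trace))  # → False
```

In an assume-guarantee setting, a monitor like this would check the guarantee only on traces where the environment assumption holds; the thesis's framework does this proof-theoretically rather than by trace monitoring.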
Finally, this thesis embeds both the practical approach to the realisation of physiological computing systems and formal verification of reconfigurations into Scrum, a modern and agile software development methodology. The surrounding methodological approach is intended to provide a frame for the systematic development of physiological computing systems from first psychological findings to a working software system with both satisfactory functionality and software quality aspects.
By integrating practical and theoretical aspects of software engineering into a self-contained development methodology, this thesis proposes a roadmap and guidelines for the creation of new physiological computing applications.
Model-based Quality Assurance of Cyber-Physical Systems with Variability in Space, over Time and at Runtime
Cyber-physical systems (CPS) are frequently characterized by three essential properties: CPS perform complex computations, CPS conduct control tasks involving continuous data and signal processing, and CPS are (parts of) distributed, and even mobile, communication systems. In addition, modern software systems like CPS have to cope with ever-growing extents of variability, namely variability in space by means of predefined configuration options (e.g., software product lines), variability at runtime by means of preplanned reconfigurations (e.g., runtime-adaptive systems), and variability over time by means of initially unforeseen updates to new versions (e.g., software evolution). Finally, depending on the particular application domain, CPS often constitute safety- and mission-critical parts of socio-technical systems. Thus, novel quality-assurance methodologies are required to systematically cope with the interplay between the different CPS characteristics on the one hand and the different dimensions of variability on the other. This thesis gives an overview of recent research and open challenges in the model-based specification and quality assurance of CPS in the presence of variability. The main focus of this thesis is on the computation and communication aspects of CPS, utilizing evolving dynamic software product lines as the engineering methodology and model-based testing as the quality-assurance technique. The research is illustrated and evaluated by means of case studies from different application domains.
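Variability in space is typically captured by a feature model whose propositional constraints define the valid configurations of a product line. The sketch below enumerates the valid configurations of a tiny hypothetical feature model; feature names and constraints are invented for illustration.

```python
from itertools import product

# A tiny hypothetical feature model for a product line, expressed as
# propositional constraints over feature selections, and an exhaustive
# enumeration of its valid configurations.
FEATURES = ["Sensor", "WiFi", "Bluetooth", "Logging"]

def is_valid(cfg):
    """cfg: dict mapping feature name -> selected (bool)."""
    return (
        cfg["Sensor"]                               # root feature is mandatory
        and (cfg["WiFi"] or cfg["Bluetooth"])       # at least one radio
        and not (cfg["WiFi"] and cfg["Bluetooth"])  # radios are alternatives (xor)
        and (not cfg["Logging"] or cfg["WiFi"])     # Logging requires WiFi
    )

valid = [
    dict(zip(FEATURES, values))
    for values in product([True, False], repeat=len(FEATURES))
    if is_valid(dict(zip(FEATURES, values)))
]
print(f"{len(valid)} valid configurations out of {2 ** len(FEATURES)}")
```

Model-based testing of a product line then has to cover this configuration space; evolving dynamic product lines additionally let the constraint set itself change over time and at runtime, which is precisely the interplay the thesis addresses.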