Monetary and Fiscal Policies in an Open Economy
The central theme of this paper is that international linkages between national economies influence, in fundamentally important ways, the effectiveness and proper conduct of national macroeconomic policies. Specifically, our purpose is to summarize what both the traditional approach to open economy macroeconomics (as developed largely by James Meade, Robert Mundell, and J. Marcus Fleming) and more recent developments imply for the conduct of macroeconomic policies in open economies. Our discussion is organized around three key linkages between national economies: through commodity trade; through capital mobility; and through exchange of national monies. These linkages have important implications concerning the effects of macroeconomic policies in open economies that differ from the effects of such policies in closed economies. Recent developments in the theory of macroeconomic policy have established conditions for the effectiveness of policies in influencing output and employment, emphasizing the distinction between anticipated and unanticipated policy actions, the importance of incomplete information, and the consequences of contracts that fix nominal wages and prices over finite intervals. In this paper, we shall not analyze how these conditions are modified in an open economy. However, since our concern is with macroeconomic policy, a principal objective of which is to influence output and employment, we shall assume that the requisite conditions for such influence are satisfied.
The separate neural control of hand movements and contact forces
To manipulate an object, we must simultaneously control the contact forces exerted on the object and the movements of our hand. Two alternative views of manipulation have been proposed: one in which motions and contact forces are represented and controlled by separate neural processes, and one in which motions and forces are controlled jointly, by a single process. To evaluate these alternatives, we designed three tasks in which subjects maintained a specified contact force while their hand was moved by a robotic manipulandum. The prescribed contact force and hand motions were selected in each task to induce the subject to attain one of three goals: (1) exerting a regulated contact force, (2) tracking the motion of the manipulandum, and (3) attaining both force and motion goals concurrently. By comparing subjects' performances in these three tasks, we found that behavior was captured by the summed actions of two independent control systems: one applying the desired force, and the other guiding the hand along the predicted path of the manipulandum. Furthermore, the application of transcranial magnetic stimulation pulses to the posterior parietal cortex selectively disrupted the control of motion but did not affect the regulation of static contact force. Together, these findings are consistent with the view that manipulation of objects is performed by independent brain control of hand motions and interaction forces.
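A minimal sketch of the superposition account these findings support: the total motor command is the sum of an independent force controller and a motion controller tracking the predicted path of the manipulandum. The control laws, gains, and signals below are illustrative assumptions, not quantities fitted to the study's data.

```python
# Hedged sketch of the two-controller superposition account: the total motor
# command is the sum of an independent force controller and a motion
# controller that tracks the predicted manipulandum path. Gains and control
# laws are illustrative assumptions, not fits to the study.

def force_controller(desired_force, measured_force, kf=0.8):
    """Regulate contact force toward the prescribed level (proportional law)."""
    return kf * (desired_force - measured_force)

def motion_controller(predicted_pos, hand_pos, hand_vel, kp=50.0, kd=5.0):
    """Guide the hand along the predicted manipulandum path (PD/impedance law)."""
    return kp * (predicted_pos - hand_pos) - kd * hand_vel

def motor_command(desired_force, measured_force, predicted_pos, hand_pos, hand_vel):
    # The two commands are computed independently and simply summed.
    return (force_controller(desired_force, measured_force)
            + motion_controller(predicted_pos, hand_pos, hand_vel))

print(motor_command(5.0, 4.2, predicted_pos=0.10, hand_pos=0.08, hand_vel=0.05))
```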
Learning Redundant Motor Tasks With and Without Overlapping Dimensions: Facilitation and Interference Effects
Prior learning of a motor skill creates motor memories that can facilitate or interfere with learning of new, but related, motor skills. One hypothesis of motor learning posits that for a sensorimotor task with redundant degrees of freedom, the nervous system learns the geometric structure of the task and improves performance by selectively operating within that task space. We tested this hypothesis by examining whether transfer of learning between two tasks depends on shared dimensionality between their respective task spaces. Human participants wore a data glove and learned to manipulate a computer cursor by moving their fingers. Separate groups of participants learned two tasks: a prior task that was unique to each group and a criterion task that was common to all groups. We manipulated the mapping between finger motions and cursor positions in the prior task to define task spaces that either shared or did not share the task space dimensions (x-y axes) of the criterion task. We found that if the prior task shared task dimensions with the criterion task, there was an initial facilitation in criterion task performance. However, if the prior task did not share task dimensions with the criterion task, there was prolonged interference in learning the criterion task due to participants finding inefficient task solutions. These results show that the nervous system learns the task space through practice, and that the degree of shared task space dimensionality influences the extent to which prior experience transfers to subsequent learning of related motor skills.
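As a hedged illustration of the shared-dimensionality manipulation (the study's actual glove-to-cursor mappings are not given here), the sketch below constructs linear finger-to-cursor maps whose task spaces either coincide with or are orthogonal to a criterion map, and quantifies overlap via principal angles between the row spaces.

```python
# Illustrative sketch, not the study's mappings: finger motions in a
# high-dimensional glove space drive a 2-D cursor through a linear map.
# Two prior-task maps are built so their task spaces either share or do
# not share the row space (x-y dimensions) of the criterion task's map.
import numpy as np

rng = np.random.default_rng(0)
n_fingers = 10

# Criterion task: 2 x 10 map from finger angles to cursor (x, y).
A_criterion = rng.standard_normal((2, n_fingers))

# Prior task sharing dimensions: same row space, different mixing.
A_shared = rng.standard_normal((2, 2)) @ A_criterion

# Prior task with non-overlapping dimensions: rows orthogonal to A_criterion's.
Q, _ = np.linalg.qr(A_criterion.T)            # orthonormal basis of the task space
P_perp = np.eye(n_fingers) - Q @ Q.T          # projector onto its complement
A_disjoint = rng.standard_normal((2, n_fingers)) @ P_perp

def subspace_overlap(A, B):
    """Mean squared principal-angle cosine between the row spaces of A and B."""
    Qa, _ = np.linalg.qr(A.T)
    Qb, _ = np.linalg.qr(B.T)
    return np.mean(np.linalg.svd(Qa.T @ Qb, compute_uv=False) ** 2)

print(subspace_overlap(A_shared, A_criterion))    # ~1.0: full overlap
print(subspace_overlap(A_disjoint, A_criterion))  # ~0.0: no overlap
```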
Quantifying the security risk of discovering and exploiting software vulnerabilities
Most attacks on computer systems and networks are enabled by vulnerabilities in software, so assessing the security risk associated with those vulnerabilities is important. Risk models such as the Common Vulnerability Scoring System (CVSS), the Open Web Application Security Project (OWASP) model and the Common Weakness Scoring System (CWSS) have been used to qualitatively assess the security risk presented by a vulnerability. CVSS is the de facto standard, and its metrics need to be independently evaluated. In this dissertation, we propose a quantitative approach that uses actual data, mathematical and statistical modeling, data analysis, and measurement. We introduce a novel vulnerability discovery model, the Folded model, which estimates the risk of vulnerability discovery based on the number of residual vulnerabilities in a given piece of software. In addition to estimating the discovery risk for a whole system, the dissertation introduces a novel metric, termed time to vulnerability discovery, to assess the risk of an individual vulnerability being discovered. We also propose a novel vulnerability exploitability risk measure termed Structural Severity, based on software properties, namely attack entry points, vulnerability location, the presence of dangerous system calls, and reachability analysis. Beyond measurement, the dissertation also proposes predicting vulnerability exploitability risk using internal software metrics. Finally, we propose two approaches for evaluating the CVSS Base metrics. Using the availability of exploits, we first evaluated the performance of the CVSS Exploitability factor and compared it to the Microsoft (MS) rating system; the results showed that the exploitability metrics of both CVSS and MS have a high false positive rate. This finding motivated further investigation, so we introduced vulnerability reward programs (VRPs) as a novel ground truth for evaluating the CVSS Base scores. The results show that the notable lack of exploits for high-severity vulnerabilities may be the result of prioritized fixing of vulnerabilities.
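A minimal sketch of the kind of evaluation described above: predict "exploitable" by thresholding the CVSS Exploitability subscore, compare against exploit-availability ground truth, and report the false positive rate. The threshold and toy records are illustrative assumptions, not the dissertation's data.

```python
# Hedged sketch: evaluate an exploitability rating against ground truth
# (whether an exploit actually exists). Threshold and records are made up.
from dataclasses import dataclass

@dataclass
class Vuln:
    cvss_exploitability: float  # CVSS Exploitability subscore, 0-10
    exploit_exists: bool        # ground truth, e.g. from exploit databases

def false_positive_rate(vulns, threshold=8.0):
    """FPR of predicting 'exploitable' when the subscore meets the threshold."""
    fp = sum(1 for v in vulns if v.cvss_exploitability >= threshold and not v.exploit_exists)
    negatives = sum(1 for v in vulns if not v.exploit_exists)
    return fp / negatives if negatives else 0.0

sample = [Vuln(10.0, False), Vuln(8.6, True), Vuln(9.2, False), Vuln(3.9, False)]
print(false_positive_rate(sample))  # 2 of 3 non-exploited flagged -> ~0.67
```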
Sensory Motor Remapping of Space in Human-Machine Interfaces
Studies of adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. These studies have also pointed out that adaptation to novel dynamics is aimed at preserving the trajectories of a controlled endpoint, either the hand of a subject or a transported object. We review some of these experiments and present more recent studies aimed at understanding how the motor system forms representations of the physical space in which actions take place. An extensive line of investigations in visual information processing has dealt with the issue of how the Euclidean properties of space are recovered from visual signals that do not appear to possess these properties. The same question is addressed here in the context of motor behavior and motor learning by observing how people remap hand gestures and body motions that control the state of an external device. We present some theoretical considerations and experimental evidence about the ability of the nervous system to create novel patterns of coordination that are consistent with the representation of extrapersonal space. We also discuss the prospect of endowing human–machine interfaces with learning algorithms that, combined with human learning, may facilitate the control of powered wheelchairs and other assistive devices.
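As a hedged sketch of one way such a body-to-device remapping can be constructed (the abstract does not specify a method; dimensionality reduction via PCA is an assumption here), high-dimensional body signals recorded during calibration can be projected onto their top two principal directions and used as a 2-D control signal for an external device.

```python
# Hypothetical remapping sketch: reduce 8 body/glove signals to a 2-D device
# command via the top-2 principal components of calibration data. The method
# and dimensions are illustrative assumptions, not the review's prescription.
import numpy as np

rng = np.random.default_rng(1)
calibration = rng.standard_normal((500, 8))   # 500 samples of 8 body signals

# Fit the remapping from calibration data: top-2 principal directions.
mean = calibration.mean(axis=0)
_, _, Vt = np.linalg.svd(calibration - mean, full_matrices=False)
W = Vt[:2]                                    # 2 x 8 remapping matrix

def body_to_device(signals):
    """Map an 8-D posture sample to 2-D device coordinates."""
    return W @ (np.asarray(signals) - mean)

print(body_to_device(calibration[0]))
```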
Development of flat sheet ultrafiltration membrane for heavy metals removal from automobile industrial wastewater / Taha Ali A. Ben Mussa
This thesis is concerned with the production of a new flat sheet ultrafiltration (FSUF) membrane, covering the formulation, fabrication, and characterization of the developed FSUF. A spiral wound module was used in the membrane system for automobile wastewater treatment. The automobile industry generates wastewater at an average rate of about 6,700 litres per vehicle. In Malaysia, the current treatment process in the automobile industry is the activated sludge process (ASP), which shows poor efficiency for heavy metal, COD and BOD5 removal and requires high electricity consumption, increasing the treatment cost. The objective of this study is to create a new formula for an FSUF membrane that reduces the concentrations of heavy metals, COD and BOD5 to the allowable limits. The research work was divided into five phases: wastewater sampling, membrane development, membrane characterization, module fabrication and wastewater treatment system fabrication. The first phase deals with the characterization of the automobile effluent, covering iron, chromium, zinc, copper and lead as well as pH, COD, and BOD5. The initial Proton effluent showed that pH and Zn comply with EQA 2009 standards A and B; COD, BOD5, Fe, Cr and SS comply with neither standard; and Pb and Cu comply with standard B but not with standard A. The second phase deals with the development, design and fabrication of the flat sheet ultrafiltration membrane: 18 membrane formulas were created in stages I and II using titration and percentage-composition processes, yielding 4 formulas (M1-M4) from stage I and 14 membranes (M5-M18) from stage II. In stage II, the polymer (PSF) and additive (PVP) concentrations were varied across the 14 membranes (M5-M18) while the solvent (DMAc) concentration was held constant. The third phase is the characterization of the 18 membranes to identify the best performers from stages I and II. Based on flux rate, salt rejection, Scanning Electron Microscopy (SEM), Fourier Transform Infrared Spectroscopy (FTIR) and molecular weight cut-off (MWCO), one of the developed FSUF membranes was selected for incorporation into the membrane treatment system. The characterization shows that increasing the polymer concentration in the solution increases the thickness of the skin layer and decreases the porosity of the membrane surface; membranes M2 and M8 gave the best performance in stages I and II respectively. The flux rates of membranes M2 and M8 are 52,235 l/m2.hr and 66,957 l/m2.hr respectively, which is why membrane M8 was selected to run in the membrane system. In the fourth phase, four types of samples were prepared: the raw sample, the raw sample after aeration, the raw sample after coagulation, and the raw sample after aeration and coagulation in series. The last phase assesses the efficiency of the developed FSUF membrane system. The best system performance in terms of heavy metals removal is system (C), which comprises screening, the coagulation process and the developed FSUF membrane. After membrane system (C), the results for pH, BOD5, COD, Fe, Cr, Zn, Pb, Cu, SS and turbidity are 6.28, 14.3 mg/l, 24 mg/l, 0.037 mg/l, 0.036 mg/l, 0.08 mg/l, 0.071 mg/l, 0.065 mg/l, 49 mg/l and 16.6 NTU respectively. In conclusion, after using membrane system (C) in the treatment unit at the Proton factory, the effluent can be discharged in compliance with EQA 2009 standards A and B.
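For reference, membrane flux rates like those quoted above follow the standard definition J = V / (A · t), permeate volume per unit membrane area per unit time. A minimal sketch (the sample volume, area, and duration below are illustrative, not measurements from the thesis):

```python
# Standard permeate-flux calculation, J = V / (A * t), in l/(m^2.hr).
# The example numbers are illustrative assumptions, not thesis data.
def permeate_flux(volume_l, area_m2, time_hr):
    """Permeate flux in l/m2.hr from collected volume, membrane area, and time."""
    return volume_l / (area_m2 * time_hr)

# e.g. 1.2 l collected through a 0.002 m^2 coupon over 0.05 hr:
print(permeate_flux(1.2, 0.002, 0.05))  # 12000.0 l/m2.hr
```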
Model Based Test Generation and Optimization / Mohamed Mussa A. Mussa, Ph.D., Concordia University, 2015
Software testing is an essential activity in the software engineering process. It is used to enhance the quality of software products throughout the development process, inspecting different aspects of software quality such as correctness, performance and usability. Software testing consumes about 50% of the software development effort. Software products go through several testing levels, the main ones being unit-level, component-level, integration-level, system-level and acceptance-level testing. Each testing level involves a sequence of tasks such as planning, modeling, execution and evaluation.
Many systematic test generation approaches have been developed, using different languages and notations. The majority target a specific testing level, and little effort has been directed toward systematic transition between testing levels. Given the incompatibility among these approaches, tailored compatibility tools are required between the testing levels. Furthermore, several test models are usually generated to evaluate the implementation at each testing level, and there is redundancy among these models; efficient reuse of these test models represents a significant challenge. Meanwhile, the growing attention to model-driven methodologies ties development and testing activities more closely together, yet research is still required to link the testing levels.
In this PhD thesis, we propose a model-based testing framework that enables reusability and collaboration across the testing levels. In this framework, we propose test generation and test optimization approaches that, at each level, consider artifacts generated at preceding testing levels. More precisely, we propose an approach for generating integration test models from component test models, and another approach for optimizing the acceptance test model using the integration test models. To conduct our research in rigorous settings, we base our framework on a standard notation that is widely adopted for software development and testing, namely the Unified Modeling Language (UML). In our first approach, component test cases are examined to locate and select those that include an interaction among the integrated components. The selected test cases are merged to generate integration test cases, which tackles the theoretical research issue of merging test cases. Furthermore, the generated test cases are mapped against each other to remove potential redundancies. In the second approach, acceptance test optimization, integration test models are compared to the acceptance test model in order to remove test cases that have already been exercised during integration-level testing. However, not all integration test cases are suitable for the comparison: integration test cases have to be examined to ensure that they do not include test stubs for system components.
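A hedged sketch of the two steps just described, using a deliberately simplified representation (test cases as lists of component interactions rather than the thesis's UML test models; all names are hypothetical): select component test cases that cross component boundaries as integration candidates, then drop acceptance test cases already covered at integration level.

```python
# Simplified sketch of the framework's two steps. The set-based test-case
# representation is an assumption for illustration, not the UML-based models.
def select_integration_candidates(component_tests, components):
    """Keep test cases whose interactions span more than one integrated component."""
    return [t for t in component_tests
            if len({c for (c, _) in t} & components) > 1]

def optimize_acceptance(acceptance_tests, integration_tests):
    """Remove acceptance test cases already exercised during integration testing."""
    covered = {frozenset(t) for t in integration_tests}
    return [t for t in acceptance_tests if frozenset(t) not in covered]

# Test cases as lists of (component, message) interactions.
t1 = [("Cart", "add"), ("Billing", "charge")]   # crosses two components
t2 = [("Cart", "add"), ("Cart", "remove")]      # stays within one component
print(select_integration_candidates([t1, t2], {"Cart", "Billing"}))  # [t1]
print(optimize_acceptance([t1, t2], [t1]))                           # [t2]
```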
We have developed the two approaches and implemented corresponding prototypes to demonstrate the effectiveness of our work. The first prototype implements the integration test generation approach: it accepts component test models and generates integration test models. The second implements the acceptance test optimization approach: it accepts integration test models along with the acceptance test model and generates an optimized acceptance test model.