Graph Theoretical Analysis of local ultraluminous infrared galaxies and quasars
We present a methodological framework for studying galaxy evolution by
utilizing Graph Theory and network analysis tools. We study the evolutionary
processes of local ultraluminous infrared galaxies (ULIRGs) and quasars and the
underlying physical processes, such as star formation and active galactic
nucleus (AGN) activity, through the application of Graph Theoretical analysis
tools. We extract, process, and analyse the 5-38 micron mid-infrared spectra
of local (z < 0.4) ULIRGs and quasars through internally developed Python
routines in order to generate similarity graphs, in which nodes representing
ULIRGs are grouped together based on the similarity of their spectra.
Additionally, we extract and compare physical features from the mid-IR spectra,
such as the polycyclic aromatic hydrocarbon (PAH) emission and silicate
absorption depth features, as indicators of the presence of star-forming regions and
obscuring dust, in order to understand the underlying physical mechanisms of
each evolutionary stage of ULIRGs. Our analysis identifies five groups of local
ULIRGs based on their mid-IR spectra, which are quite consistent with the
well-established fork classification diagram while providing a higher-level
classification. We demonstrate how graph clustering algorithms and network
analysis tools can be utilized as unsupervised learning techniques for
revealing direct or indirect relations between various galaxy properties and
evolutionary stages, which provides an alternative methodology to previous
works for classification in galaxy evolution. Additionally, our methodology
compares the output of several graph clustering algorithms in order to
demonstrate the best-performing Graph Theoretical tools for studying galaxy
evolution.
Comment: Accepted for publication in Astronomy and Computing
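The similarity-graph approach described in this abstract can be sketched as follows. This is an illustrative toy, not the authors' pipeline: the synthetic spectra, the cosine-similarity measure, the edge threshold, and the choice of greedy modularity maximisation as the clustering algorithm are all assumptions for demonstration.

```python
# Toy sketch: build a similarity graph from (synthetic) mid-IR spectra and
# cluster it with one of several possible graph clustering algorithms.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
spectra = rng.random((20, 100))  # 20 toy "galaxies", 100 wavelength bins

# Normalise each spectrum so that dot products give cosine similarities.
unit = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)
sim = unit @ unit.T

# Nodes are galaxies; edges connect sufficiently similar spectra.
G = nx.Graph()
G.add_nodes_from(range(len(spectra)))
threshold = 0.7  # assumed cut-off, not taken from the paper
for i in range(len(spectra)):
    for j in range(i + 1, len(spectra)):
        if sim[i, j] > threshold:
            G.add_edge(i, j, weight=float(sim[i, j]))

# Greedy modularity maximisation is one candidate unsupervised grouping.
groups = community.greedy_modularity_communities(G, weight="weight")
print(len(groups), "groups covering", sum(len(g) for g in groups), "nodes")
```

Comparing the partitions produced by several such algorithms, as the abstract describes, amounts to swapping the last call for other community-detection routines and scoring the resulting groupings.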
First activity and interactions in thalamus and cortex using raw single-trial EEG and MEG elicited by somatosensory stimulation
Introduction: One of the primary motivations for studying the human brain is to comprehend how external sensory input is processed and ultimately perceived by the brain. A good understanding of these processes can promote the identification of biomarkers for the diagnosis of various neurological disorders; it can also provide ways of evaluating therapeutic techniques. In this work, we seek the minimal requirements for identifying key stages of activity in the brain elicited by median nerve stimulation.
Methods: We have used a priori knowledge and applied a simple, linear, spatial filter to the electroencephalography (EEG) and magnetoencephalography (MEG) signals to identify the early responses in the thalamus and cortex evoked by short electrical stimulation of the median nerve at the wrist. The spatial filter is defined first from the average EEG and MEG signals and then refined using consistency selection rules across single trials (ST). The refined spatial filter is then applied to extract the timecourse of each ST in each targeted generator. These ST timecourses are studied through clustering to quantify the ST variability. The nature of ST connectivity between thalamic and cortical generators is then studied within each identified cluster using linear and non-linear algorithms with time delays to extract linked and directional activities. A novel combination of linear and non-linear methods additionally discriminates influences as excitatory or inhibitory.
Results: Our method identifies two key aspects of the evoked response. Firstly, the early onset of activity in the thalamus and the somatosensory cortex, known as the P14 and P20 in EEG, the second corresponding to the M20 in MEG. Secondly, good estimates are obtained for the early timecourse of activity from these two areas. The results confirm the existence of variability in ST brain activations and reveal distinct and novel patterns of connectivity in different clusters.
Discussion: We have demonstrated that new insights into stimulus processing can be extracted without the use of computationally costly source reconstruction techniques, which require assumptions and detailed modeling of the brain. Thanks to its simplicity and minimal computational requirements, our methodology has the potential for real-time applications such as neurofeedback systems and brain-computer interfaces.
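The core idea of a linear spatial filter defined from the average signal and then applied to raw single trials can be sketched in a few lines. The data here are synthetic, and the channel count, peak latency, and filter definition (the trial-averaged topography at the latency of interest) are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a linear spatial filter for single-trial (ST) analysis:
# weights come from the average evoked topography, then the same weights
# are applied to every raw single trial to extract one timecourse each.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 50, 64, 200
peak = 40  # assumed sample index of the early evoked peak

# Synthetic trials: a shared evoked topography times a Gaussian pulse,
# plus independent channel noise.
topography = rng.standard_normal(n_channels)
pulse = np.exp(-0.5 * ((np.arange(n_samples) - peak) / 5.0) ** 2)
trials = np.outer(topography, pulse) + 0.5 * rng.standard_normal(
    (n_trials, n_channels, n_samples)
)

# 1. Define the filter from the trial-averaged signal at the peak latency.
average = trials.mean(axis=0)
w = average[:, peak]
w /= np.linalg.norm(w)

# 2. Apply the filter to each single trial: one timecourse per trial.
st_timecourses = np.einsum("c,tcs->ts", w, trials)
print(st_timecourses.shape)  # one row per trial, one column per sample
```

The refinement step described in the abstract would then reject or reweight trials whose filtered timecourses are inconsistent with the rest, before clustering and connectivity analysis.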
The Price of Defense
We consider a game on a graph G = ⟨V, E⟩ with two confronting classes of randomized players: ν attackers, who choose vertices and seek to minimize the probability of getting caught, and a single defender, who chooses edges and seeks to maximize the expected number of attackers it catches. In a Nash equilibrium, no player has an incentive to unilaterally deviate from her randomized strategy. The Price of Defense is the worst-case ratio, over all Nash equilibria, of ν over the expected utility of the defender at a Nash equilibrium. We orchestrate a strong interplay of arguments from Game Theory and Graph Theory to obtain both general and specific results in the considered setting: (1) Via a reduction to a Two-Player, Constant-Sum game, we observe that an arbitrary Nash equilibrium is computable in polynomial time. Further, we prove a general lower bound of |V|/2 on the Price of Defense. We derive a characterization of graphs with a Nash equilibrium attaining this lower bound, which reveals a promising connection to Fractional Graph Theory; thereby, it implies an efficient recognition algorithm for such Defense-Optimal graphs. (2) We study some specific classes of Nash equilibria, both for their computational complexity and for their incurred Price of Defense. The classes are defined by imposing structure on the players' randomized strategies: either graph-theoretic structure on the supports, or symmetry and uniformity structure on the probabilities. We develop novel graph-theoretic techniques to derive trade-offs between computational complexity and the Price of Defense for these classes. Some of the techniques touch upon classical milestones of Graph Theory; for example, we derive the first game-theoretic characterization of König-Egerváry graphs as graphs admitting a Matching Nash equilibrium.
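The polynomial-time computability in result (1) rests on a standard fact: an equilibrium of any two-player constant-sum game can be found by linear programming. The sketch below solves the row player's maximin LP for an invented 2x2 payoff matrix; it is not derived from any particular attacker-defender graph instance.

```python
# Maximin strategy of a two-player constant-sum game via linear programming:
# maximize v subject to (A^T x)_j >= v for every column j, sum(x) = 1, x >= 0.
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])  # toy payoffs to the row player
m, n = A.shape

# Decision variables: x_1..x_m (mixed strategy) and v (game value).
# linprog minimizes, so we minimize -v.
c = np.zeros(m + 1)
c[-1] = -1.0
A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("maximin strategy:", x, "game value:", v)
```

For this matrix the optimal mix is (0.5, 0.5) with value 1.5; the same LP shape scales to the strategy spaces induced by a graph's vertices and edges.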
Project risk management using Event Calculus
Risks are unavoidable in systems engineering projects due to changes emerging during the lifetime of a project. Changes in activities and people's roles are usually not free from conflicts, which, if not dealt with adequately, increase the risk of project failure and could bring the whole project to a standstill. Herein, we present a method that examines the impact of change on a project's duration, which constitutes one of the most critical risks in project management. The proposed method utilizes two main criteria, namely the degree of dependency among activities and actors, and the temporal costs associated with the change. The proposed methodology is realised in Event Calculus (EC) and elaborated with an example.
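To make the Event Calculus flavour of the method concrete, here is a toy, propositional rendering of its core inference: events initiate and terminate fluents, and whether a fluent holds at a time point is derived from the event narrative under inertia. The activities and the "delayed" fluent are invented for illustration; the paper's actual EC axiomatisation is richer than this.

```python
# Toy Event Calculus core: HoldsAt(f, t) is true if some earlier event
# initiated f and no later event before t terminated it (inertia).
def holds_at(fluent, t, narrative, initiates, terminates):
    state = False
    for time, event in sorted(narrative):
        if time >= t:
            break
        if (event, fluent) in initiates:
            state = True
        if (event, fluent) in terminates:
            state = False
    return state

# Hypothetical project narrative: a change request delays the project
# until a replanning action resolves the conflict.
initiates = {("change_request", "delayed")}
terminates = {("replan", "delayed")}
narrative = [(2, "change_request"), (5, "replan")]

print(holds_at("delayed", 4, narrative, initiates, terminates))  # True
print(holds_at("delayed", 6, narrative, initiates, terminates))  # False
```

In the paper's setting, the initiating and terminating conditions would encode the dependency degrees and temporal costs of a change rather than hand-written pairs.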
Driver behaviour analysis through simulation
Human error is one of the principal factors that lead to road accidents, and is attributed to increased mental workload induced by distractions such as advertisements and in-vehicle music. Workload, however, is characterized by intrinsic properties and is hence difficult to observe and quantify. Phenotype behaviours, such as lateral deviation, speed, and headway, act as good indicators of driver workload and driving style. Driving simulators have emerged as a promising technology for the analysis of driving conditions and road users' behaviour in an attempt to tackle the problem of road accidents. However, the cost of designing or owning a simulator to conduct a safety analysis is prohibitive for many government agencies. The work presented herein demonstrates the design and development of a driving simulator using a 3D game engine. The simulator was employed to analyze the driving behaviours of local road users at a chosen black spot in Nicosia, Cyprus. Data collected from the experiments were analyzed, and preliminary results are presented along with conclusions. © 2013 IEEE
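Phenotype indicators like those named above can be computed directly from simulator logs. The log fields, values, and choice of summary statistics below are assumptions for illustration, not the paper's actual measures or data.

```python
# Toy per-frame simulator log and two common workload/style proxies:
# the standard deviation of lateral position (lane keeping) and the
# mean time headway to the lead vehicle (following behaviour).
import statistics

log = [
    {"lateral": 0.10, "speed": 48.0, "headway": 2.1},
    {"lateral": -0.25, "speed": 52.0, "headway": 1.8},
    {"lateral": 0.40, "speed": 55.0, "headway": 1.5},
    {"lateral": -0.05, "speed": 50.0, "headway": 1.9},
]  # lateral in metres from lane centre, speed in km/h, headway in seconds

sdlp = statistics.stdev(frame["lateral"] for frame in log)
mean_headway = statistics.mean(frame["headway"] for frame in log)
print(f"SDLP: {sdlp:.3f} m, mean headway: {mean_headway:.2f} s")
```

Higher lateral-deviation variability and shorter headways over a run would then be read as signs of elevated workload or riskier style at the studied black spot.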
Cooperative Games Among Densely Deployed WLAN Access Points
The high popularity of Wi-Fi technology for wireless access has led to a common problem of densely deployed access points (APs) in residential or commercial buildings, competing to use the same or overlapping frequency channels and causing degradation to the user experience due to excessive interference. This degradation is partly caused by the restriction whereby each client device is allowed to be served only by one of a very limited set of APs (e.g., belonging to the same residential unit), even if it is within the range of (or even has a better signal quality to) many other APs. The current chapter proposes a cooperative strategy to mitigate the interference and enhance the quality of service in dense wireless deployments by having neighboring APs agree to take turns (e.g., in round-robin fashion) to serve each other's clients. We present and analyze a cooperative game-theoretic model of the incentives involved in such cooperation and identify the conditions under which cooperation would be beneficial for the participating APs.
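The incentive question the chapter studies can be previewed with a back-of-the-envelope comparison for two interfering APs: simultaneous transmission on overlapping channels suffers an interference penalty, while round-robin turn-taking halves each AP's airtime but is interference-free. The rates and penalty values below are invented illustrative numbers, not the chapter's model.

```python
# Toy comparison: when does round-robin cooperation beat simultaneous
# transmission for two APs on overlapping channels?
def standalone_throughput(rate, interference_penalty):
    # Both APs transmit all the time; interference reduces effective rate.
    return rate * (1.0 - interference_penalty)

def cooperative_throughput(rate):
    # APs take turns: half the airtime each, but clean (no interference).
    return rate * 0.5

rate = 100.0  # assumed clean-airtime rate in Mb/s
for penalty in (0.3, 0.6):
    alone = standalone_throughput(rate, penalty)
    coop = cooperative_throughput(rate)
    verdict = "cooperate" if coop > alone else "stay standalone"
    print(f"penalty={penalty}: standalone={alone:.0f} Mb/s, "
          f"cooperative={coop:.0f} Mb/s -> {verdict}")
```

In this crude model, cooperation pays off exactly when the interference penalty exceeds 50%; the chapter's game-theoretic analysis derives the analogous conditions rigorously and addresses whether the cooperative agreement is stable.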