Automated Response to Query System
SMS (Short Message Service) is now a hugely popular and powerful business communication technology for mobile phones. In order to respond correctly to a free-form factual question given a large collection of texts, a system needs to understand the question at a level that allows it to determine some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought-after answer and may even suggest using different strategies when looking for and verifying a candidate answer. In this paper we focus on various attempts to overcome the central contradiction: the technical limitations of the SMS standard versus the huge amount of information found for a possible answer.
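As a rough illustration of the answer-type constraints described above, the sketch below classifies a question into a coarse semantic class and trims a candidate answer to the 160-character SMS limit. The patterns, labels, and function names are illustrative assumptions, not the authors' actual system.

```python
# Hypothetical sketch: constraining the expected answer type of a factual
# question, then fitting the answer into a single SMS. Illustrative only.
import re

ANSWER_TYPE_PATTERNS = [
    (r"^\s*who\b", "PERSON"),
    (r"^\s*where\b", "LOCATION"),
    (r"^\s*when\b", "DATE"),
    (r"^\s*how many\b", "NUMBER"),
    (r"^\s*what\b", "ENTITY"),
]

def expected_answer_type(question: str) -> str:
    """Return a coarse semantic class for the sought-after answer."""
    q = question.lower()
    for pattern, answer_type in ANSWER_TYPE_PATTERNS:
        if re.search(pattern, q):
            return answer_type
    return "UNKNOWN"

def truncate_for_sms(answer: str, limit: int = 160) -> str:
    """Fit a candidate answer into one 160-character SMS message."""
    return answer if len(answer) <= limit else answer[: limit - 1] + "\u2026"

print(expected_answer_type("Who wrote Hamlet?"))  # PERSON
print(truncate_for_sms("William Shakespeare"))    # fits in one SMS
```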
CLASSIFYING AND RESPONDING TO NETWORK INTRUSIONS
Intrusion detection systems (IDS) have been widely adopted within the IT community, as
passive monitoring tools that report security related problems to system administrators.
However, the increasing number and evolving complexity of attacks, along with the
growth and complexity of networking infrastructures, have led to overwhelming numbers of
IDS alerts, which leave a significantly smaller timeframe for a human to respond. The need
for automated response is therefore very much evident. However, the adoption of such
approaches has been constrained by practical limitations and administrators' consequent
mistrust of systems' abilities to issue appropriate responses.
The thesis presents a thorough analysis of the problem of intrusions, and identifies false
alarms as the main obstacle to the adoption of automated response. A critical examination
of existing automated response systems is provided, along with a discussion of why a new
solution is needed. The thesis determines that, while the detection capabilities remain
imperfect, the problem of false alarms cannot be eliminated. Automated response
technology must take this into account, and instead focus upon avoiding the disruption of
legitimate users and services in such scenarios. The overall aim of the research has
therefore been to enhance the automated response process by considering the context of an
attack, and to investigate and evaluate a means of making intelligent response decisions.
The realisation of this objective has included the formulation of a response-oriented
taxonomy of intrusions, which is used as a basis to systematically study intrusions and
understand the threats detected by an IDS. From this foundation, a novel Flexible
Automated and Intelligent Responder (FAIR) architecture has been designed, as the basis
from which flexible and escalating levels of response are offered, according to the context
of an attack. The thesis describes the design and operation of the architecture, focusing
upon the contextual factors influencing the response process, and the way they are
measured and assessed to formulate response decisions. The architecture is underpinned by
the use of response policies which provide a means to reflect the changing needs and
characteristics of organisations.
The main concepts of the new architecture were validated via a proof-of-concept prototype
system. A series of test scenarios were used to demonstrate how the context of an attack
can influence the response decisions, and how the response policies can be customised and
used to enable intelligent decisions. This helped to prove that the concept of flexible
automated response is indeed viable, and that the research has provided a suitable
contribution to knowledge in this important domain.
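To make the idea of context-driven, escalating response concrete, here is a minimal sketch in the spirit of the FAIR architecture: an alert's context is scored and mapped onto progressively more disruptive response levels. The contextual factors, weights, and thresholds are invented for illustration; the thesis defines its own policy mechanism.

```python
# Minimal sketch of context-aware, escalating response selection.
# All factors and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AlertContext:
    severity: float             # 0..1, from the IDS alert
    confidence: float           # 0..1, likelihood the alert is not a false alarm
    target_criticality: float   # 0..1, importance of the attacked asset

# Escalating response levels, ordered from least to most disruptive.
RESPONSE_LEVELS = ["log_only", "notify_admin", "rate_limit", "block_source"]

def decide_response(ctx: AlertContext, policy_thresholds=(0.25, 0.5, 0.75)) -> str:
    """Score the attack context and map it onto an escalating response level."""
    score = ctx.severity * ctx.confidence * ctx.target_criticality
    level = sum(score >= t for t in policy_thresholds)  # count thresholds passed
    return RESPONSE_LEVELS[level]

# A probable false alarm, even on a critical server, only gets logged.
print(decide_response(AlertContext(severity=0.9, confidence=0.1,
                                   target_criticality=1.0)))  # log_only
```

Keeping low-confidence alerts at non-disruptive responses mirrors the thesis's central point: while false alarms cannot be eliminated, automated response can avoid punishing legitimate users and services.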
Use of automated response systems in the small sized class
An interactive demonstration of various teaching methods using Remote Response Systems (Automated Response Systems, Personal Response Systems, Clickers) as they were applied to a small-sized class (< 20 students). Most of the research with clickers has been in large classes. The methods used (peer instruction, group discussion, and simple polling with contingency teaching) will be demonstrated, and results of the initial study with a class of 14 students will be presented. Internet access for all participants (local computers, laptops, or phones) will be necessary for this activity.
Automated Response Surface Methodology for Stochastic Optimization Models with Unknown Variance
Response Surface Methodology (RSM) is a tool that was introduced in the early 1950s by Box and Wilson (1951). It is a collection of mathematical and statistical techniques useful for the approximation and optimization of stochastic models. Applications of RSM can be found in, e.g., the chemical, engineering and clinical sciences. In this paper we are interested in finding the best settings for an automated RSM procedure when there is very little information about the stochastic objective function. We present a framework of RSM procedures for finding optimal solutions in the presence of noise. We emphasize the use of both stopping rules and restart procedures. Good stopping rules recognize when no further improvement is being made. Restarts are used to escape from non-optimal regions of the domain. We compare different versions of the RSM algorithms on a number of test functions, including a simulation model for cancer screening. The results show that considerable improvement is possible over the settings proposed in the existing literature.
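The following sketch shows the shape of such an automated procedure: a noisy objective, a gradient estimate averaged over replications, a stopping rule that halts when no further improvement is detectable, and restarts that escape non-optimal regions. It uses a simplified first-order (steepest-descent) step as a stand-in for the paper's RSM machinery, and all tuning values are illustrative assumptions.

```python
# Illustrative sketch of an automated RSM-style loop with a stopping rule
# and restarts. Not the authors' procedure; a simplified stand-in.
import numpy as np

def noisy_objective(x, rng):
    # Toy stochastic objective with unknown noise variance.
    return np.sum((x - 2.0) ** 2) + rng.normal(scale=0.5)

def estimate_gradient(f, x, rng, h=0.1, reps=5):
    """Central-difference gradient averaged over replications to fight noise."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        diffs = [(f(x + e, rng) - f(x - e, rng)) / (2 * h) for _ in range(reps)]
        g[i] = np.mean(diffs)
    return g

def automated_rsm(f, dim=2, step=0.2, max_iters=200, tol=0.05,
                  restarts=3, seed=0):
    rng = np.random.default_rng(seed)
    best_x, best_val = None, np.inf
    for _ in range(restarts):                    # restarts escape bad regions
        x = rng.uniform(-5, 5, size=dim)
        for _ in range(max_iters):
            g = estimate_gradient(f, x, rng)
            if np.linalg.norm(g) < tol:          # stopping rule: no improvement
                break
            x = x - step * g / np.linalg.norm(g)  # steepest-descent move
        val = np.mean([f(x, rng) for _ in range(20)])
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

x_opt, v = automated_rsm(noisy_objective)
print(x_opt, v)  # should land near the optimum at (2, 2)
```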
Evaluating automated longitudinal tumor measurements for glioblastoma response assessment.
Automated tumor segmentation tools for glioblastoma show promising performance. To apply these tools for automated response assessment, consistency in longitudinal segmentation and tumor measurement is critical. This study aimed to determine whether BraTumIA and HD-GLIO are suited for this task. We evaluated the two segmentation tools with respect to automated response assessment on the single-center retrospective LUMIERE dataset, with 80 patients and a total of 502 post-operative time points. Volumetry and automated bi-dimensional measurements were compared with expert measurements following the Response Assessment in Neuro-Oncology (RANO) guidelines. The longitudinal trend agreement between the expert and the methods was evaluated, and the RANO progression thresholds were tested against the expert-derived time-to-progression (TTP). The correlation between TTP and overall survival (OS) was used to check the progression thresholds. We also evaluated the automated detection, and the influence, of non-measurable lesions. The tumor volume trend agreement between segmentation volumes and the expert bi-dimensional measurements was high (HD-GLIO: 81.1%, BraTumIA: 79.7%). BraTumIA achieved the closest match to the expert TTP using the recommended RANO progression threshold. HD-GLIO-derived tumor volumes reached the highest correlation between TTP and OS (0.55). Both tools failed to produce an accurate lesion count across time. Manual false-positive removal and restricting analysis to a maximum number of measurable lesions had no beneficial effect. Expert supervision and manual corrections are still necessary when applying the tested automated segmentation tools for automated response assessment. The longitudinal consistency of current segmentation tools needs further improvement, and validation of volumetric and bi-dimensional progression thresholds in multi-center studies is required to move toward volumetry-based response assessment.
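For readers unfamiliar with the RANO measurements being compared here, the sketch below applies the standard bi-dimensional rule: progression is flagged when the sum of products of perpendicular lesion diameters increases by at least 25% over the nadir. The lesion values are invented examples; this is not the study's evaluation code.

```python
# Minimal sketch of a RANO-style bi-dimensional progression check.
# Lesion measurements below are invented example values.

def sum_of_products(lesions):
    """Sum of products of the two perpendicular diameters (mm) per lesion."""
    return sum(d1 * d2 for d1, d2 in lesions)

def is_progression(current_lesions, nadir_spd, threshold=0.25):
    """Progression: >= 25% increase in sum of products over the nadir."""
    return sum_of_products(current_lesions) >= nadir_spd * (1 + threshold)

baseline = [(20.0, 15.0), (12.0, 10.0)]   # two measurable lesions, mm
follow_up = [(26.0, 17.0), (12.0, 11.0)]

nadir = sum_of_products(baseline)          # 420 mm^2
print(is_progression(follow_up, nadir))    # True: 574 mm^2 is a +36.7% change
```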
Modelling the impact of university ICT strategies on learning
This research explores the potential of certain Future Studies techniques (Barbieri Masini, 1994) to provide insight into the question of how developing countries might best exploit Information and Communication Technology (ICT) for higher education.
First, three case studies were examined: the African Virtual University (AVU), the Arab Open University (AOU) and the Syrian Virtual University (SVU). From these accounts, key variables related to the research question were identified, the selection of variables validated by comparison with D'Antoni (2003). Globalisation is seen as a key change driver. Secondly, a model of 'ICT Strategy' was developed, elaborating the well-known concept of distance education 'generations', building on the work of Nipper (1989) and subsequent authors. A model of 'Student Learning' was also developed, drawing on Conole et al. (2004). These models were then coordinated to generate possible scenarios for how ICT strategy might influence student learning, making assumptions about 'typical' usage. There is no presumption of deriving ineluctable scenarios from unproblematic antecedent models; the aim rather was to explore the limitations of the best models currently available as generators of broad-brush scenarios, to try to understand the ways in which such models could be improved.
One interpretation is that if institutions, under pressure from globalisation, adopted 2nd generation technologies alone, the impact on Student Learning would be neglect of Social aspects. Meanwhile, although a mix of generations could in principle provide coverage of the whole Individual-Social dimension, if institutions adopted 3rd generation technologies alone, the impact on Student Learning would be neglect of Individual aspects. This supports the warning by Clegg et al. (2003) that uncritical acceptance of pressures to adopt new ICT for education, under the rhetoric of 'student-centred learning', can in fact turn out to have negative consequences for students. Moreover, it should not be assumed that a move to using 5th and 6th generation technologies exclusively necessarily represents a progression. If the AVU chose this strategy without high bandwidth for online video conferencing, the analysis suggests that its students would miss out on Social aspects.
Nevertheless, it is also possible that a move straight to the fourth and subsequent generations could, in principle, provide coverage of the Individual-Social dimension, without the need for face-to-face tutorials or unreliable postal systems that feature in earlier generations.
Four scenarios are discerned, distinguished by the balance between presentation of information and direct experience on the one hand, and the level of student autonomy on the other. None of the case study universities is yet clearly positioned in a single scenario.
Examination of the strength of the analysis suggests that although some testable hypotheses have been generated in relation to diverse pedagogical scenarios, a richer selection of variables, more sophisticated models, and more detailed institutional data would be of value.
References
Barbieri Masini, E. (1994) Why Futures Studies, Grey Seal, London.
Conole, G., Dyke, M., Oliver, M. & Seale, J. (2004) 'Mapping pedagogy and tools for effective learning design', Computers and Education, 43, 17-33.
D'Antoni, S. (Ed.) (2003) The Virtual University: Models and Messages, Lessons from Case Studies, UNESCO International Institute for Educational Planning.
Nipper, S. (1989) 'Third generation distance learning and computer conferencing', in Mason, R. and Kaye, A. (Eds.) Mindweave: Communication, Computers and Distance Education, Oxford: Pergamon.
Bayesian Learning Networks Approach to Cybercrime Detection
The growing dependence of modern society on telecommunication and information networks has become inevitable. The increase in the number of networks interconnected to the Internet has led to an increase in security threats and cybercrimes such as Distributed Denial of Service (DDoS) attacks. Any Internet-based attack is typically prefaced by a reconnaissance probe process, which might take just a few minutes, hours, days, or even months before the attack takes place. In order to detect distributed network attacks as early as possible, a probabilistic approach based on Bayesian networks, still under research and development, has been proposed. This paper shows how a Bayesian network probabilistically detects communication network attacks, allowing for generalization of Network Intrusion Detection Systems (NIDSs). Learning agents that deploy the Bayesian network approach are considered a promising and useful tool for identifying suspicious early events of Internet threats and consequently relating them to subsequent activities.
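As a toy version of the inference the paper describes, the following hand-rolled two-state calculation applies Bayes' rule to relate an observed reconnaissance probe to the posterior probability of a follow-on attack. All probabilities are illustrative assumptions rather than values from the paper.

```python
# Tiny Bayesian-network-style calculation: P(attack | probe evidence).
# All probabilities below are illustrative assumptions.

# Prior probability that an attack campaign is underway.
P_ATTACK = 0.02

# Likelihoods of observing a reconnaissance probe, given each state.
P_PROBE_GIVEN_ATTACK = 0.90   # attackers usually probe first
P_PROBE_GIVEN_BENIGN = 0.05   # scans also occur without a follow-on attack

def posterior_attack(probe_observed: bool) -> float:
    """P(attack | probe evidence) via Bayes' rule over a two-state model."""
    if probe_observed:
        p_evidence = (P_PROBE_GIVEN_ATTACK * P_ATTACK
                      + P_PROBE_GIVEN_BENIGN * (1 - P_ATTACK))
        return P_PROBE_GIVEN_ATTACK * P_ATTACK / p_evidence
    p_evidence = ((1 - P_PROBE_GIVEN_ATTACK) * P_ATTACK
                  + (1 - P_PROBE_GIVEN_BENIGN) * (1 - P_ATTACK))
    return (1 - P_PROBE_GIVEN_ATTACK) * P_ATTACK / p_evidence

print(f"P(attack | probe seen) = {posterior_attack(True):.3f}")   # ~0.269
print(f"P(attack | no probe)   = {posterior_attack(False):.4f}")  # ~0.0021
```

A single probe raises the attack probability from 2% to roughly 27%, which is why early reconnaissance events are useful evidence to propagate through a larger network of related activities.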