A historical GIS for England and Wales: a framework for reconstructing past geographies and analysing long-term change
This thesis describes the creation and possible uses of a Geographical Information System that contains the changing boundaries of the major administrative units of England and Wales from 1840 to 1974. For over 150 years the census, the General Register Office, and others have used these units to publish a wealth of data concerning the population of the country. The key issue addressed by the thesis is that changes in the administrative geography have hampered much research on long-term change in society that could have been done using these sources. The goal of the thesis is the creation of a framework for the analysis of long-term socio-economic change that makes maximum use of the available data.
This involves making use not only of the data's attribute (statistical) component but also of their spatial and temporal components. To do this, the thesis provides solutions to two key problems. The first is how to build a GIS containing administrative units that incorporates an accurate record of their changing boundaries and can be linked to statistical data in a flexible manner. The second is how to remove the impact of boundary changes when comparing datasets published at different dates. This is done by devising a methodology for interpolating data from the administrative units in which they were published onto a single target geography, as sketched below. An evaluation of the accuracy of this interpolation is performed, and examples are given of how this type of research could be conducted. Taken together, these will release information locked up within historical socio-economic statistics by allowing space to be explicitly incorporated into any exploration of the data. This, in turn, allows research to explore the past with increased levels of both spatial and attribute data over longer time periods.
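As a rough illustration of what such areal interpolation involves, the sketch below apportions counts from source units to a target unit by overlapping area. This is only a minimal area-weighted version of the idea; the thesis evaluates its own, more developed methodology, and the district names and figures here are invented.

```python
# Minimal area-weighted interpolation sketch (illustrative data only).
from shapely.geometry import box

# Hypothetical source units with published population counts.
source_units = {
    "district_a": (box(0, 0, 2, 2), 1000),  # (geometry, population)
    "district_b": (box(2, 0, 4, 2), 600),
}
# A single hypothetical unit from the fixed target geography.
target = box(1, 0, 3, 2)

# Apportion each source count by the share of the source unit's
# area that falls inside the target unit.
estimate = 0.0
for name, (geom, count) in source_units.items():
    overlap = geom.intersection(target).area
    estimate += count * overlap / geom.area

print(f"Estimated population of target unit: {estimate:.0f}")
# Half of each source unit overlaps the target: 500 + 300 = 800.
```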
The role of error factors in teaching
This thesis describes a wide-ranging enquiry into the nature, identification and treatment of pupil error. It takes, as its main point of departure, the work of Harlow on the role of 'error factors' in learning, and it was undertaken in the belief that educators could considerably enhance the effectiveness of their teaching by making more insightful appraisals of the learning obstacles (or error factors) of their pupils. Harlow first introduced the concept of error factors following his research into 'learning set' formation. Lewis and Pask subsequently incorporated Harlow's ideas in their own work, and laid particular stress on the significance of error factors which give rise to whole classes of errors.
The concept of error factors and their role in teaching are examined in this thesis in both psychological and philosophical terms, because the two approaches are mutually illuminating. Despite scant literature in this sphere, the entire topic is opened up to systematic examination. Hypotheses are presented concerning the role and nature of error factors, and novel strategies are proposed for treating them. A special diagnostic framework for error factors has been formulated, and its effectiveness investigated in a series of case studies and pilot experiments, culminating in a major experimental investigation concerning the overall role of error factors in teaching. Strategies for error factor prediction and prevention are presented and examined, including the use of algorithms. Major consideration is given to the exploitation of error factors as powerful tools to enhance learning. There is a detailed theoretical discussion at each stage, so that the full significance of the findings can be assessed. Inferences concerning the role of error factors in teaching are examined in conjunction with methodological and other implications arising from the experimental findings. A number of weaknesses of current teaching strategies are identified, and the thesis concludes with various speculations (e.g. concerning the use of computer-assisted instruction in this context) which have been prompted by this investigation.
A Graph-Based Approach for the Summarization of Scientific Articles
Automatic text summarization is one of the prominent applications in the field of Natural Language Processing. Text summarization is the process of generating a gist from text documents. The task is to produce a summary which contains important, diverse and coherent information, i.e., a summary should be self-contained. Approaches to text summarization are conventionally extractive: they select a subset of sentences from an input document to form a summary. In this thesis, we introduce a novel graph-based extractive summarization approach.
With the progressive advancement of research in the various fields of science, the summarization of scientific articles has become an essential requirement for researchers. This is our prime motivation in selecting scientific articles as our dataset. This newly formed dataset contains scientific articles from the PLOS Medicine journal, a high-impact journal in the field of biomedicine. The summarization of scientific articles is a single-document summarization task. It is complex for several reasons: the important information in a scientific article is scattered throughout it, and scientific articles contain a substantial amount of redundant information. In our approach, we deal with three important factors of summarization: importance, non-redundancy and coherence. To deal with these factors, we use graphs, as they alleviate data sparsity problems and are computationally less complex.
We employ a bipartite graph representation exclusively for the summarization task. We represent each input document as a bipartite graph consisting of sentence nodes and entity nodes. This representation captures entity transition information, which is beneficial for selecting relevant sentences for a summary. We use a graph-based ranking algorithm to rank the sentences in a document; the resulting ranks are treated as relevance scores of the sentences and used further in our approach.
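As a rough illustration of this representation, the sketch below builds a bipartite sentence/entity graph and ranks sentences with a HITS-style algorithm. The ranking algorithm actually used in the thesis may differ, and the sentences and entities here are invented.

```python
# Bipartite sentence/entity graph with a HITS-style ranking (illustrative).
import networkx as nx

sentences = {
    "s1": {"summarization", "graph"},
    "s2": {"graph", "entity", "summarization"},
    "s3": {"entity"},
}

# One node per sentence, one per entity; an edge when the entity
# occurs in the sentence.
G = nx.Graph()
for sid, entities in sentences.items():
    for ent in entities:
        G.add_edge(sid, ent)

# HITS treats sentences as hubs over shared entities; the hub score
# serves here as a stand-in for the sentence relevance score.
hubs, authorities = nx.hits(G, max_iter=500)
ranked = sorted(sentences, key=lambda s: hubs[s], reverse=True)
print(ranked)  # ['s2', 's1', 's3'] -- s2 touches the most shared entities
```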
Scientific articles contain a considerable amount of redundant information; for example, the Introduction and Methodology sections contain similar information regarding the motivation and the approach. In our approach, we ensure that the summary contains only non-redundant sentences.
Although a summary should contain the important and non-redundant information of the input document, its sentences should also be connected to one another so that it is coherent, understandable and simple to read. If coherence is not ensured, the sentences of a summary may not be properly connected, which leads to an obscure summary. Until now, only a few summarization approaches have taken coherence into account. In our approach, we address coherence in two different ways: through a graph measure and through structural information. Specifically, we employ outdegree as the graph measure and coherence patterns as the structural information.
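One way to read the outdegree measure is to project the bipartite graph onto sentences, adding a forward edge whenever a later sentence shares an entity with an earlier one, and to treat a sentence's outdegree as a rough coherence signal. The sketch below follows that assumption; the exact projection and weighting used in the thesis may differ, and the data is invented.

```python
# Outdegree in a forward sentence projection of the bipartite graph.
from itertools import combinations

sentences = {
    "s1": {"summarization", "graph"},
    "s2": {"graph", "entity"},
    "s3": {"entity", "dataset"},
}
order = list(sentences)  # dicts preserve insertion (document) order

# Forward edge a -> b whenever a later sentence b shares an entity with a.
outdegree = {s: 0 for s in order}
for a, b in combinations(order, 2):
    if sentences[a] & sentences[b]:
        outdegree[a] += 1

print(outdegree)  # {'s1': 1, 's2': 1, 's3': 0}
```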
We use integer programming as an optimization technique to select the best subset of sentences for a summary. Sentences are selected on the basis of relevance, diversity and coherence measures; the computation of these measures is tightly integrated and handled simultaneously.
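A minimal sketch of such a selection step, written with the PuLP library, is given below. The objective here simply trades relevance against pairwise redundancy under a length budget, whereas the thesis couples relevance, diversity and coherence; all scores and the budget are invented.

```python
# Sentence selection as a small integer linear program (illustrative).
import pulp

relevance = {"s1": 0.9, "s2": 0.7, "s3": 0.6}
redundancy = {("s1", "s2"): 0.8, ("s1", "s3"): 0.1, ("s2", "s3"): 0.2}
budget = 2  # maximum number of sentences in the summary

prob = pulp.LpProblem("summary_selection", pulp.LpMaximize)
x = {s: pulp.LpVariable(s, cat="Binary") for s in relevance}
# Pair variable y[a, b] should be 1 iff both sentences are selected.
y = {p: pulp.LpVariable(f"y_{p[0]}_{p[1]}", cat="Binary") for p in redundancy}

# Objective: total relevance minus a penalty for redundant pairs.
prob += (pulp.lpSum(relevance[s] * x[s] for s in x)
         - pulp.lpSum(redundancy[p] * y[p] for p in y))

prob += pulp.lpSum(x.values()) <= budget
for (a, b), var in y.items():  # linearize y = x_a AND x_b
    prob += var >= x[a] + x[b] - 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in x if x[s].value() == 1])
# ['s1', 's3']: s2 is dropped because it is highly redundant with s1.
```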
We use human judgements to evaluate the coherence of summaries. We compare ROUGE scores and human judgements of different systems on the PLOS Medicine dataset; our approach performs considerably better than the other systems on this dataset. We also apply our approach to the standard DUC 2002 dataset to compare the results with recent state-of-the-art systems. The results show that our graph-based approach outperforms the other systems on DUC 2002. In conclusion, our approach is robust, i.e., it works on both scientific and news articles, and it has the further advantage of being semi-supervised.
The development of a novel agent based long term domestic energy stock model
This research has developed a novel long-term domestic energy stock model of owner-occupied dwellings in England. Its primary purpose is to aid policy makers in determining appropriate policy measures to achieve CO2 emissions reductions in the housing sector.
Current modelling techniques can provide a highly disaggregated, technology-rich environment, but they do not consider the behaviour required for technological changes to the dwelling stock. Energy efficiency improvements will only occur in the owner-occupied sector of the housing market when owners decide to carry out such improvements. Therefore, a stock model that can simulate this decision-making process will be of more use to policy makers in predicting the impact of different measures designed to encourage uptake of suitable technologies. Agent-based modelling has been proposed as a solution, allowing individual household decision making to be included in a long-term domestic stock model. The agents in the model represent households and have a simple additive weighting decision-making algorithm based on discrete choice survey data from the Energy Saving Trust and Element Energy. The model has then been calibrated against historic technology diffusion data. Sixteen scenarios have been developed and tested in the model. The initial Business as Usual scenarios indicate that current policies are likely to fall well short of the 2050 80% emissions reduction target, although subsequent scenarios indicate that the target is achievable. The results also indicate that care is required when setting subsidy levels where competing technologies are available, as there is the potential to suppress the diffusion of technologies that offer greater potential savings. The developed model can now be used by policy makers in testing further scenarios, and this novel approach can be applied both regionally and in other countries, subject to the collection of suitable input data.
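As an illustration of the decision rule mentioned above, the sketch below implements a generic simple additive weighting (SAW) choice over heating technologies. The attributes, weights and option scores are invented for illustration, not the calibrated values derived from the Energy Saving Trust and Element Energy survey data.

```python
# Generic simple additive weighting (SAW) choice rule (illustrative data).
options = {
    # Attribute scores already normalised to [0, 1], higher is better.
    "gas_boiler":    {"capital_cost": 0.9, "running_cost": 0.5, "co2": 0.3},
    "heat_pump":     {"capital_cost": 0.4, "running_cost": 0.7, "co2": 0.9},
    "solar_thermal": {"capital_cost": 0.5, "running_cost": 0.6, "co2": 0.8},
}

def saw_choice(options, weights):
    """Pick the option with the highest weighted sum of attribute scores."""
    def score(attrs):
        return sum(weights[a] * v for a, v in attrs.items())
    return max(options, key=lambda o: score(options[o]))

# One hypothetical household agent's preference weights (summing to 1).
weights = {"capital_cost": 0.5, "running_cost": 0.3, "co2": 0.2}
print(saw_choice(options, weights))  # 'gas_boiler' for this cost-led agent
```

In an agent-based run, each household agent would carry its own weight vector drawn from the survey-based distribution, so aggregate technology diffusion emerges from many such individual choices.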
Second CLIPS Conference Proceedings, volume 2
Papers presented at the 2nd C Language Integrated Production System (CLIPS) Conference, held at the Lyndon B. Johnson Space Center (JSC) on 23-25 September 1991, are documented in these proceedings. CLIPS is an expert system tool developed by the Software Technology Branch at NASA JSC and is used at over 4000 sites by government, industry, and business. During the three days of the conference, over 40 papers were presented by experts from NASA, the Department of Defense, other government agencies, universities, and industry.