Gate-tunable bandgap in bilayer graphene
The tight-binding model of bilayer graphene is used to find the gap between the conduction and valence bands as a function of both the gate voltage and the doping by donors or acceptors. The total Hartree energy is minimized and an equation for the gap is obtained. The ratio of the gap to the chemical potential given by this equation is determined only by the screening constant. Thus the gap is strictly proportional to the gate voltage, or to the carrier concentration, in the absence of donors or acceptors. In the opposite case, where donors or acceptors are present, the gap exhibits asymmetric behavior on the electron and hole sides of the gate bias. A comparison with experimental data obtained by Kuzmenko et al. demonstrates good agreement. Comment: 6 pages, 5 figures
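The proportionality stated in the abstract can be written schematically (a sketch of the stated relations only, with assumed symbols: $\Delta$ the gap, $\mu$ the chemical potential, $\Lambda$ the screening constant, $V_g$ the gate voltage, $n$ the carrier concentration):

```latex
% Ratio fixed by screening alone, hence linear scaling in the undoped case
\frac{\Delta}{\mu} = f(\Lambda)
\quad\Longrightarrow\quad
\Delta \propto V_g \propto n
\qquad \text{(no donors or acceptors)}
```

With donors or acceptors present, this linearity is lost and the gap becomes asymmetric between the electron and hole sides of the gate bias, as the abstract notes.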
Active financial analysis: stimulating engagement using Bloomberg for introductory finance students
There is increasing interest in the adoption of real-world interactive and participative learning techniques within economics and finance teaching through the use of trading room software. Previous research suggests that the integration of trading room software can improve knowledge development and performance. However, the time constraints of providing software training and the requirement for foundation knowledge of basic maths and economics have restricted the adoption of trading room software to advanced courses. This paper outlines how the Bloomberg Professional Software was used in an introductory finance course and analyses student engagement, learning and attainment using feedback and performance data. We find that students valued the novelty of Bloomberg as part of a mix of different learning activities which facilitated the practical application of theory. Results also indicate that the alignment of teaching, learning and assessment promotes deeper engagement, and is associated with higher attainment. We demonstrate that trading room software can be effectively used in introductory courses to enhance the student experience and deepen understanding.
A Common Data Model for Meta-Data in Interoperable Environments
A Common Data Model is a unifying structure used to allow heterogeneous environments to interoperate. An object-oriented common model is presented in this paper, which provides this unifying structure for a Meta-Data Repository Visualisation Tool. The creation of this common model from the meta-data held in component databases is described. The role this common model plays in interoperable environments is discussed, and the physical architecture created from the examination of the meta-data in the Repository common model is described.
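A minimal sketch of what such an object-oriented common model might look like (class and attribute names here are illustrative, not taken from the paper): each component database contributes meta-data describing its entities, and the common model merges them into one unifying structure.

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    """A column/field described by component-database meta-data."""
    name: str
    data_type: str

@dataclass
class Entity:
    """A table/class described in one component database."""
    name: str
    attributes: list[Attribute] = field(default_factory=list)

@dataclass
class CommonModel:
    """Unifying structure built from the meta-data of all components."""
    entities: dict[str, Entity] = field(default_factory=dict)

    def merge(self, source_entities: list[Entity]) -> None:
        # Fold one component database's meta-data into the common model,
        # unioning the attributes of entities that share a name.
        for e in source_entities:
            target = self.entities.setdefault(e.name, Entity(e.name))
            known = {a.name for a in target.attributes}
            target.attributes.extend(a for a in e.attributes if a.name not in known)

# Two component databases both describe "Customer" with different fields;
# the common model unifies them into one entity with both attributes.
model = CommonModel()
model.merge([Entity("Customer", [Attribute("id", "int")])])
model.merge([Entity("Customer", [Attribute("name", "str")])])
```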
Asynchronous sampling for decentralized periodic event-triggered control
Decentralized periodic event-triggered control (DPETC) strategies are an attractive solution for wireless cyber-physical systems where resources such as network bandwidth and sensor power are scarce. These strategies have the advantage of preventing unnecessary data transmissions and therefore reduce bandwidth and energy requirements; however, the sensor sampling regime remains synchronous. Typically the act of sampling leads almost immediately to a transmission once an event is detected. If the sampling is synchronous, multiple transmission requests may be raised at the same time, which leads to bursty traffic patterns. Bursty traffic patterns are critical to DPETC system performance, as the probability of collisions and the amount of requested bandwidth become high, ultimately causing delays. In this paper, we propose an asynchronous sampling scheme for DPETC. The scheme ensures that at each sampling time no more than one transmission request can be generated, which prevents network traffic collisions. At the same time, for the DPETC system with asynchronous sampling, a pre-designed global exponential stability and L2-gain performance can still be guaranteed. We illustrate the effectiveness of the approach through a numerical example.
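One simple way to realise such an asynchronous regime (a sketch under assumed details, not the paper's construction) is to stagger each node's periodic sampling instants by a distinct phase offset, so that no two nodes ever sample, and hence never raise a transmission request, at the same instant:

```python
def sampling_times(node: int, num_nodes: int, period: float, horizon: float):
    """Periodic sampling instants for one node, phase-shifted so that
    distinct nodes never sample (hence never transmit) simultaneously."""
    offset = node * period / num_nodes   # distinct offset per node
    times, t = [], offset
    while t < horizon:
        times.append(round(t, 9))        # round away float drift
        t += period
    return times

# With 4 nodes and period h = 1.0 over a 10 s horizon, every sampling
# instant across all nodes is distinct, so at most one transmission
# request can be generated at any sampling time.
all_times = [t for i in range(4) for t in sampling_times(i, 4, 1.0, 10.0)]
assert len(all_times) == len(set(all_times))
```

The design choice is the same one the abstract motivates: collisions arise from simultaneous requests, so eliminating simultaneity at the sampling level removes the bursty traffic pattern at its source.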
Threat modeling for communication security of IoT-enabled digital logistics
The modernization of logistics through the use of Wireless Sensor Network (WSN) Internet of Things (IoT) devices promises great efficiencies. Sensor devices can provide real-time or near real-time condition monitoring and location tracking of assets during the shipping process, helping to detect delays, prevent loss, and stop fraud. However, the integration of low-cost WSN/IoT systems into a pre-existing industry should first consider security within the context of the application environment. In the case of logistics, the sensors are mobile, unreachable during the deployment, and accessible in potentially uncontrolled environments. The risks to the sensors include physical damage (malicious or unintentional, due to accident or the environment), physical attack on a sensor, and remote attack on its communication. The easiest attack against any sensor is against its communication. The use of IoT sensors for logistics involves the deployment conditions of mobility, inaccessibility, and uncontrolled environments, and any threat analysis needs to take these factors into consideration. This paper presents a threat model focused on an IoT-enabled asset tracking/monitoring system for smart logistics. A review of the current literature shows that no current IoT threat model highlights logistics-specific IoT security threats for the shipping of critical assets. A general tracking/monitoring system architecture is presented that describes the roles of the components. A logistics-specific threat model that considers the operational challenges of sensors used in logistics, covering both malicious and non-malicious threats, is then given. The threat model categorizes each threat and suggests a potential countermeasure.
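The categorise-and-countermeasure structure described could be represented along these lines (the threat names, categories, and countermeasures below are generic placeholders, not the paper's taxonomy):

```python
from dataclasses import dataclass
from enum import Enum

class ThreatClass(Enum):
    PHYSICAL = "physical"            # tampering or destruction in transit
    COMMUNICATION = "communication"  # attacks on the wireless link
    ENVIRONMENTAL = "environmental"  # non-malicious (shock, moisture, heat)

@dataclass
class Threat:
    name: str
    category: ThreatClass
    malicious: bool
    countermeasure: str

# Illustrative entries in the style the abstract describes: each threat
# is categorized and paired with a potential countermeasure.
threats = [
    Threat("radio jamming", ThreatClass.COMMUNICATION, True,
           "channel hopping / delay-tolerant buffering"),
    Threat("message eavesdropping", ThreatClass.COMMUNICATION, True,
           "link-layer encryption"),
    Threat("sensor crushed by cargo", ThreatClass.ENVIRONMENTAL, False,
           "ruggedised enclosure and redundant sensors"),
]

# Group by category, mirroring the model's classification step.
by_class: dict[ThreatClass, list[Threat]] = {}
for t in threats:
    by_class.setdefault(t.category, []).append(t)
```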
Parallel methods for the update of partitioned inverted files
Purpose – An issue which tends to be ignored in information retrieval is the issue of updating inverted files. This is largely because inverted files were devised to provide fast query service, and much work has been done with the emphasis strongly on queries. In this paper we study the effect of using parallel methods for the update of inverted files in order to reduce costs, by looking at two types of partitioning for inverted files: document identifier and term identifier.
Design/methodology/approach – Raw update service and update with query service are studied with these partitioning schemes using an incremental update strategy. We use standard measures used in parallel computing such as speedup to examine the computing results and also the costs of reorganising indexes while servicing transactions.
Findings – Empirical results show that for both transaction processing and index reorganisation the document identifier method is superior. However, there is evidence that the term identifier partitioning method could be useful in a concurrent transaction processing context.
Practical implications – There is an increasing need to service updates, which is now becoming a requirement of inverted files (for dynamic collections such as the Web), demonstrating that the requirements of inverted file maintenance have shifted.
Originality/value – The paper is of value to database administrators who manage large-scale and dynamic text collections, and who need to use parallel computing to implement their text retrieval services.
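The two partitioning schemes compared in this paper and the next can be sketched as follows (a simplification with illustrative names: under DocId partitioning a document's postings all live in one partition, while under TermId partitioning a term's whole posting list lives in one partition):

```python
def docid_partition(doc_id: int, num_parts: int) -> int:
    """DocId partitioning: all of a document's postings go to one
    partition, so updating one document touches one partition."""
    return doc_id % num_parts

def termid_partition(term: str, num_parts: int) -> int:
    """TermId partitioning: a term's full posting list lives in one
    partition, so one document update fans out across partitions."""
    return hash(term) % num_parts

def partitions_touched_by_update(doc_id: int, terms: list[str], num_parts: int):
    """Which partitions an incremental update of one document touches
    under each scheme."""
    docid_touch = {docid_partition(doc_id, num_parts)}
    termid_touch = {termid_partition(t, num_parts) for t in terms}
    return docid_touch, termid_touch

d, t = partitions_touched_by_update(42, ["inverted", "file", "update"], 4)
# DocId touches a single partition; TermId may touch up to one per term,
# which is consistent with DocId being cheaper for update transactions.
assert len(d) == 1 and 1 <= len(t) <= 3
```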
Parallel methods for the generation of partitioned inverted files
Purpose
– The generation of inverted indexes is one of the most computationally intensive activities for information retrieval systems: indexing large multi-gigabyte text databases can take many hours or even days to complete. We examine the generation of partitioned inverted files in order to speed up the process of indexing. Two types of index partitions are investigated: TermId and DocId.
Design/methodology/approach
– We use standard measures used in parallel computing such as speedup and efficiency to examine the computing results and also the space costs of our trial indexing experiments.
Findings
– The results from runs on both partitioning methods are compared and contrasted, concluding that DocId is the more efficient method.
Practical implications
– The practical implications are that the DocId partitioning method would in most circumstances be used for distributing inverted file data in a parallel computer, particularly if indexing speed is the primary consideration.
Originality/value
– The paper is of value to database administrators who manage large-scale text collections, and who need to use parallel computing to implement their text retrieval services.
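The speedup and efficiency measures referred to in these abstracts are the standard parallel-computing definitions; the example timings below are invented purely for illustration:

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """S(p) = T(1) / T(p): how many times faster the p-processor run is."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, p: int) -> float:
    """E(p) = S(p) / p: the fraction of ideal linear speedup achieved."""
    return speedup(t_serial, t_parallel) / p

# e.g. an indexing job taking 25 h on 1 node vs 4 h on 8 nodes:
s = speedup(25.0, 4.0)        # 6.25x faster
e = efficiency(25.0, 4.0, 8)  # 0.78125, i.e. ~78% of ideal
```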
PLIERS at VLC2
This paper describes experiments done on the VLC2 collection at TREC-7. The methods used for indexing text are described together with the results; these cover the official BASE1 collection, plus some larger unofficial collections named BASE2 and BASE4. Search times on these collections are described and discussed with a particular emphasis on scale-up, for both weighted term search and passage retrieval. The various configurations for the experiments are described.