YSO jets in the Galactic Plane from UWISH2: I - MHO catalogue for Serpens and Aquila
Jets and outflows from Young Stellar Objects (YSOs) are important signposts
of currently ongoing star formation. In order to study these objects we are
conducting an unbiased survey along the Galactic Plane in the 1-0 S(1) emission
line of molecular hydrogen at 2.122 μm using the UK Infrared Telescope. In this
paper we focus on a 33 square degree region in Serpens and Aquila
(18 deg < l < 30 deg; -1.5 deg < b < +1.5 deg).
We trace 131 jets and outflows from YSOs, a 15-fold increase
in the total number of known Molecular Hydrogen Outflows. By contrast, the
total integrated 1-0 S(1) flux of all objects only roughly doubles, since the
previously known objects occupy the bright end of the flux distribution. Our
completeness limit is 3*10^-18 W m^-2, with 70% of the objects having fluxes
below 10^-17 W m^-2.
Generally, the flows are associated with Giant Molecular Cloud complexes and
have a scale height of 25-30 pc with respect to the Galactic Plane. We are able
to assign potential source candidates to about half of the objects. Typically,
the flows are clustered in groups of 3-5 objects within a radius of 5 pc. These
groups are separated on average by about half a degree, and two thirds of the
entire survey area is devoid of outflows. We find a large range of apparent
outflow lengths, from 4 arcsec to 130 arcsec. If we assume a distance of 3 kpc,
only 10% of all outflows are of parsec scale. There is a 2.6 sigma
overabundance of flow position angles roughly perpendicular to the Galactic
Plane.
Comment: 13 pages, 1 table (Appendix B not included), 6 figures; accepted for
publication by MNRAS. A version with higher resolution figures can be found
at http://astro.kent.ac.uk/~df
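As a quick check of the angular-to-linear scale used in this abstract (illustrative only; the 3 kpc distance is the paper's stated assumption), the small-angle conversion can be worked through in a few lines:

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

def projected_length_pc(theta_arcsec: float, distance_pc: float) -> float:
    """Small-angle projected length: L = d * theta, with theta in radians."""
    return distance_pc * theta_arcsec * ARCSEC_TO_RAD

# At the assumed 3 kpc (3000 pc) distance, the longest observed flow
# (130 arcsec) projects to just under 2 pc, while the 1 pc
# "parsec scale" threshold corresponds to roughly 69 arcsec.
longest = projected_length_pc(130, 3000)        # ~1.89 pc
threshold_arcsec = 1.0 / (3000 * ARCSEC_TO_RAD)  # ~68.8 arcsec
```

This is consistent with only a small fraction of the 4-130 arcsec flows exceeding parsec scale at that distance.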
xPF: Packet Filtering for Low-Cost Network Monitoring
The ever-increasing complexity of network infrastructures makes the demand for network monitoring tools critical. While the majority of network operators rely on low-cost open-source tools based on commodity hardware and operating systems, increasing link speeds and the growing complexity of network monitoring applications have revealed inefficiencies in the existing software organization, which may prohibit the use of such tools in high-speed networks. Although several new architectures have been proposed to address these problems, they require significant effort to re-engineer the existing body of applications. We present an alternative approach that addresses the primary sources of inefficiency without significantly altering the software structure. Specifically, we enhance the computational model of the Berkeley Packet Filter (BPF) to move much of the processing associated with monitoring into the kernel, thereby removing the overhead associated with context switching between the kernel and applications. The resulting packet filter, called xPF, allows new tools to be implemented more efficiently and existing tools to be easily optimized for high-speed networks. We present the design and implementation of xPF as well as several example applications that demonstrate the efficiency of our approach.
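The kernel/user-space trade-off the abstract describes can be illustrated with a toy model (a hypothetical Python sketch, not the actual xPF or BPF API): when aggregation stays inside the filter, the number of kernel/user boundary crossings stops growing with traffic volume.

```python
from collections import Counter

# Toy packet stream: (protocol, dst_port) pairs standing in for real packets.
packets = [("tcp", 80), ("udp", 53), ("tcp", 80), ("tcp", 22), ("tcp", 80)]

def per_packet_delivery(pkts):
    """Classic BPF-style model: every matching packet is delivered to user
    space, so boundary crossings grow linearly with matching traffic."""
    crossings = 0
    counts = Counter()
    for proto, port in pkts:
        if proto == "tcp":          # filter predicate
            crossings += 1          # one kernel/user crossing per match
            counts[port] += 1       # aggregation happens in user space
    return counts, crossings

def in_kernel_aggregation(pkts):
    """xPF-style model: the filter program aggregates inside the kernel and
    user space reads one summary, independent of traffic volume."""
    counts = Counter()
    for proto, port in pkts:
        if proto == "tcp":
            counts[port] += 1       # aggregation stays "in the kernel"
    crossings = 1                   # single read of the finished summary
    return counts, crossings
```

Both models compute the same per-port counts; only the number of context switches differs, which is the overhead xPF targets.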
Recommended from our members
MobileTrust: Secure Knowledge Integration in VANETs
Vehicular Ad hoc NETworks (VANETs) are becoming popular due to the emergence of the Internet of Things and ambient intelligence applications. In such networks, secure resource sharing is accomplished by incorporating trust schemes. Current solutions adopt peer-to-peer technologies that can cover the large operational area. However, these systems fail to capture some inherent properties of VANETs, such as fast and ephemeral interaction, making robust trust evaluation of crowdsourcing challenging. In this article, we propose MobileTrust, a hybrid trust-based system for secure resource sharing in VANETs. The proposal is a breakthrough in centralized trust computing that utilizes cloud and upcoming 5G technologies to provide robust trust establishment with global scalability. The ad hoc communication is energy-efficient and protects the system against threats that are not countered by current settings. To evaluate its performance and effectiveness, MobileTrust is modelled in the SUMO simulator and tested on the traffic features of the small German city of Eichstätt. Similar schemes are implemented on the same platform to provide a fair comparison. Moreover, MobileTrust is deployed on a typical embedded system platform and applied to a real smart car installation for monitoring traffic and road-state parameters in an urban application. The proposed system is developed under the EU-funded THREAT-ARREST project to provide security, privacy, and trust in an intelligent and energy-aware transportation scenario, bringing closer the vision of a sustainable circular economy.
Pattern-driven security, privacy, dependability and interoperability management of iot environments
Achieving Security, Privacy, Dependability and Interoperability (SPDI) is of paramount importance for the ubiquitous deployment and impact maximization of Internet of Things (IoT) applications. Nevertheless, said requirements are not only difficult to achieve at system initialization, but also hard to prove and maintain at run-time. This paper highlights an approach to tackling the above challenges through the definition of a pattern language and a framework that can guarantee SPDI in IoT orchestrations. By integrating pattern reasoning engines at the various layers of the IoT infrastructure, along with a machine-processable representation of said patterns through Drools rules, the proposed framework provides ways to fulfil SPDI requirements at design time, as well as the means to guarantee those SPDI properties and manage the orchestrations accordingly at run-time. Moreover, an application example of the framework is presented in an Industrial IoT monitoring environment.
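The core pattern-reasoning idea can be sketched in miniature (illustrative Python, not the actual Drools rules; the component names and the composition rule below are assumptions): a pattern states which SPDI properties a composition inherits from its components, so an engine can derive orchestration-level guarantees mechanically.

```python
from functools import reduce

# Hypothetical per-component SPDI property sets; "secure" here is a label
# asserted by certification, not a proof of security.
components = {
    "sensor":    {"secure", "dependable"},
    "gateway":   {"secure"},
    "analytics": {"secure", "dependable"},
}

def sequential_composition(props_a, props_b):
    """Pattern rule (toy): a property holds for a sequential composition
    only if both composed parts guarantee it."""
    return props_a & props_b

def orchestration_properties(chain, catalog):
    """Fold the composition pattern over an ordered orchestration chain,
    yielding the properties guaranteed end to end."""
    return reduce(sequential_composition, (catalog[c] for c in chain))
```

In the real framework, rules of this shape (expressed in Drools) also fire at run-time, so a component losing a property immediately invalidates the derived orchestration-level guarantee.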
Sampling-Based Query Re-Optimization
Despite decades of work, query optimizers still make mistakes on
"difficult" queries because of bad cardinality estimates, often due to the
interaction of multiple predicates and correlations in the data. In this paper,
we propose a low-cost post-processing step that can take a plan produced by the
optimizer, detect when it is likely to have made such a mistake, and take steps
to fix it. Specifically, our solution is a sampling-based iterative procedure
that requires almost no changes to the original query optimizer or query
evaluation mechanism of the system. We show that this indeed imposes low
overhead and catches cases where three widely used optimizers (PostgreSQL and
two commercial systems) make large errors.
Comment: This is the extended version of a paper with the same title and
authors that appears in the Proceedings of the ACM SIGMOD International
Conference on Management of Data (SIGMOD 2016).
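The sampling-based validation step can be sketched as follows (a toy Python model; the function names and the error threshold are illustrative, not the paper's implementation): the optimizer's cardinality estimate for a plan fragment is checked against an estimate computed on a uniform sample, and a large disagreement triggers re-planning with the corrected value.

```python
import random

def sample_cardinality(table, predicate, sample_size=100, seed=0):
    """Estimate |sigma_p(T)| by evaluating predicate p on a uniform
    sample of T and scaling the hit rate up to the full table."""
    rng = random.Random(seed)
    sample = rng.sample(table, min(sample_size, len(table)))
    hits = sum(1 for row in sample if predicate(row))
    return hits / len(sample) * len(table)

def validate_estimate(optimizer_estimate, table, predicate, error_factor=2.0):
    """One iteration of the post-processing step: keep the optimizer's
    estimate if the sample agrees within error_factor, otherwise return
    the sample-based value (which, in the full procedure, is injected
    back into the optimizer to produce a new plan)."""
    checked = sample_cardinality(table, predicate)
    if checked == 0 or optimizer_estimate == 0:
        return checked
    ratio = max(optimizer_estimate, checked) / min(optimizer_estimate, checked)
    return optimizer_estimate if ratio <= error_factor else checked
```

The iteration terminates when sampling confirms every risky estimate in the chosen plan, which is why the procedure needs no changes to the optimizer or executor themselves.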
Reporting and interpretation of SF-36 outcomes in randomised trials: systematic review
Objective: To determine how often health surveys and quality of life evaluations reach different conclusions from those of primary efficacy outcomes, and whether discordant results make a difference in the interpretation of trial findings.
Ultra-Low-Power Superconductor Logic
We have developed a new superconducting digital technology, Reciprocal
Quantum Logic, that uses AC power carried on a transmission line, which also
serves as a clock. Using simple experiments we have demonstrated zero static
power dissipation, thermally limited dynamic power dissipation, high clock
stability, high operating margins and low BER. These features indicate that the
technology is scalable to far more complex circuits at a significant level of
integration. On the system level, Reciprocal Quantum Logic combines the high
speed and low-power signal levels of Single-Flux-Quantum signals with the
design methodology of CMOS, including low static power dissipation, low latency
combinational logic, and efficient device count.
Comment: 7 pages, 5 figures
Cyber insurance of information systems: Security and privacy cyber insurance contracts for ICT and healthcare organizations
Nowadays, more and more aspects of our daily activities are digitalized. Data and assets in cyberspace, for both individuals and organizations, must be safeguarded. Thus, the insurance sector must face the challenge of digital transformation in the 5G era with the right set of tools. In this paper, we present CyberSure, an insurance framework for information systems. CyberSure investigates the interplay between certification, risk management, and insurance of cyber processes. It promotes continuous monitoring as the new building block for cyber insurance, in order to overcome the current obstacles of identifying contractual violations by the insured party in real time and receiving early warning notifications prior to a violation. Lightweight monitoring modules capture the status of the operating components and send data to the CyberSure backend system, which performs the core decision making. Therefore, an insured system is certified dynamically, with the risk and insurance perspectives being evaluated at runtime as the system operation evolves. As new data become available, the risk management and insurance policies are adjusted and fine-tuned. When an incident occurs, the insurance company possesses adequate information to assess the situation quickly, estimate the level of a potential loss accurately, and decrease the time required to compensate the insured customer. The framework is applied in the ICT and healthcare domains, assessing the systems of medium-sized organizations. GDPR implications are also considered, with the overall setting being effective and scalable.