The impact and penetration of location-based services
Since the invention of digital technology, its development has followed an entrenched path of miniaturisation and decentralisation with increasing focus on individual and niche applications. Computer hardware has moved from remote centres to desktop and hand-held devices whilst being embedded in various material infrastructures. Software has followed the same course. The entire process has converged on a path where various analogue devices have become digital and are increasingly being embedded in machines at the smallest scale. In a parallel but essential development, there has been a convergence of computers with communications, ensuring that the delivery and interaction mechanisms for computer software are now focused on networks of individuals, not simply through the desktop but in mobile contexts. Various inert media such as fixed television are becoming more flexible as computers and visual media become one. With such massive convergence and miniaturisation, new software and new applications define the cutting edge. As computers are increasingly tailored to individual niches, new digital services are emerging, many of which represent applications which hitherto did not exist or at best were rarely focused on a mass market. Location-based services form one such application, and in this paper we will both speculate on and make some initial predictions of the geographical extent to which such services will penetrate different markets. We define such services in detail below, but suffice it to say at this stage that such functions involve the delivery of traditional services using digital media and telecommunications. High-profile applications are now being focused on hand-held devices, typically involving information on product location and entertainment, but wider applications involve fixed installations on the desktop where services are delivered through traditional fixed infrastructure. Both wired and wireless applications define this domain.
The market for such services is inevitably volatile and unpredictable at this early stage, but we will attempt here to provide some rudimentary estimates of what might happen in the next five to ten years. The 'network society' which has developed through this convergence is, according to Castells (1989, 2000), changing and re-structuring the material basis of society such that information has come to dominate wealth creation, in the sense that information is both a raw material of production and an outcome of production as a tradable commodity. This has been fuelled by the way technology has expanded following Moore's Law and by fundamental changes in the way telecommunications, finance, insurance, utilities and so on are being regulated. Location-based services are becoming an integral part of this fabric, and they reflect yet another convergence between geographic information systems, global positioning systems, and satellite remote sensing. The first geographical information system, CGIS, was developed as part of the Canada Land Inventory in 1965, and the acronym 'GIS' was introduced in 1970. 1972 saw the launch of the first Earth resources satellite, LANDSAT-1. The 1970s also saw prototypes of ISDN and the mobile telephone, and the introduction of TCP/IP as the dominant network protocol. The 1980s saw the IBM XT (1983) and the beginning of de-regulation in the US, Europe and Japan of key sectors within the economy. Finally, in the 1990s we saw the introduction of the World Wide Web and the ubiquitous pervasion of business and recreation by networked PCs, the Internet, mobile communications, the growing use of GPS for locational positioning, and GIS for the organisation and visualisation of spatial data. By the end of the 20th century, the number of mobile telephone users had reached 700 million worldwide.
The increasing mobility of individuals, the anticipated availability of broadband communications for mobile devices and the growing volumes of location-specific information available in databases will inevitably lead to demand for services that deliver location-related information to individuals on the move. Such location-based services (LBS), although at a very early stage of development, are likely to play an increasingly important part in the development of social structures and business in the coming decades. In this paper we begin by defining location-based services within the context we have just sketched. We then develop a simple model of the market for location-based services, building on the standard non-linear saturation model of market penetration. We illustrate this for mobile devices, namely mobile phones, in the following sections, and then develop an analysis of different geographical regimes characterised by different growth rates and income levels worldwide. This leads us to speculate on the extent to which location-based services are beginning to take off and penetrate the market. We conclude with scenarios for future growth through the analogy of GIS and mobile penetration.
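The standard non-linear saturation model referred to above is, in its simplest form, a logistic curve: penetration grows almost exponentially at first and then flattens as it approaches a saturation level. A minimal sketch in Python; the saturation level, growth rate and inflection year below are hypothetical illustrations, not estimates from the paper:

```python
import numpy as np

def logistic_penetration(t, K, r, t0):
    """Logistic (saturation) model of market penetration: uptake rises
    roughly exponentially at first, then saturates at capacity K.
    r is the growth rate and t0 the inflection (half-saturation) time."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical illustration: saturation at 80% of the population,
# growth rate 0.6 per year, inflection in the year 2005.
years = np.arange(1995, 2016)
share = logistic_penetration(years, K=0.80, r=0.6, t0=2005)

# At the inflection year the curve passes through exactly half of K.
print(f"penetration in 2005: {share[years == 2005][0]:.2f}")
```

Fitting K, r and t0 to observed adoption data (e.g. by non-linear least squares) is how such a model yields the kind of penetration forecasts the paper discusses.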
Tonic inhibition of accumbal spiny neurons by extrasynaptic α4βδ GABAA receptors modulates the actions of psychostimulants
Within the nucleus accumbens (NAc), synaptic GABAA receptors (GABAARs) mediate phasic inhibition of medium spiny neurons (MSNs) and influence behavioral responses to cocaine. We demonstrate that both dopamine D1- and D2-receptor-expressing MSNs (D-MSNs) additionally harbor extrasynaptic GABAARs incorporating α4, β, and δ subunits that mediate tonic inhibition, thereby influencing neuronal excitability. Both the selective δ-GABAAR agonist THIP and DS2, a selective positive allosteric modulator, greatly increased the tonic current of all MSNs from wild-type (WT), but not from δ−/− or α4−/− mice. Coupling dopamine and tonic inhibition, the acute activation of D1 receptors (by a selective agonist or indirectly by amphetamine) greatly enhanced tonic inhibition in D1-MSNs but not D2-MSNs. In contrast, prolonged D2 receptor activation modestly reduced the tonic conductance of D2-MSNs. Behaviorally, WT and constitutive α4−/− mice did not differ in their expression of cocaine-conditioned place preference (CPP). Importantly, however, mice with the α4 deletion specific to D1-expressing neurons (α4D1−/−) showed increased CPP. Furthermore, THIP administered systemically or directly into the NAc of WT, but not α4−/− or α4D1−/− mice, blocked cocaine enhancement of CPP. In comparison, α4D2−/− mice exhibited normal CPP, but no cocaine enhancement. In conclusion, dopamine modulation of GABAergic tonic inhibition of D1- and D2-MSNs provides an intrinsic mechanism to differentially affect their excitability in response to psychostimulants and thereby influence their ability to potentiate conditioned reward. Therefore, α4βδ GABAARs may represent a viable target for the development of novel therapeutics to better understand and influence addictive behaviors
Decoding neuronal ensembles in the human hippocampus
BACKGROUND: The hippocampus underpins our ability to navigate, to form and recollect memories, and to imagine future experiences. How activity across millions of hippocampal neurons supports these functions is a fundamental question in neuroscience, wherein the size, sparseness, and organization of the hippocampal neural code are debated. RESULTS: Here, by using multivariate pattern classification and high spatial resolution functional MRI, we decoded activity across the population of neurons in the human medial temporal lobe while participants navigated in a virtual reality environment. Remarkably, we could accurately predict the position of an individual within this environment solely from the pattern of activity in his hippocampus even when visual input and task were held constant. Moreover, we observed a dissociation between responses in the hippocampus and parahippocampal gyrus, suggesting that they play differing roles in navigation. CONCLUSIONS: These results show that highly abstracted representations of space are expressed in the human hippocampus. Furthermore, our findings have implications for understanding the hippocampal population code and suggest that, contrary to current consensus, neuronal ensembles representing place memories must be large and have an anisotropic structure
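The multivariate pattern classification used in this study can be illustrated with a toy decoder: a linear classifier trained on many voxels at once can recover a label (here, "position") that no single voxel encodes reliably. The sketch below runs on synthetic data and is not the authors' pipeline; the voxel count, signal strength and classifier choice are all assumptions for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for hippocampal voxel patterns: 4 "positions",
# 40 trials each, 200 voxels, with a weak position-specific template
# added to independent noise on every trial.
n_positions, n_trials, n_voxels = 4, 40, 200
templates = rng.normal(0.0, 1.0, (n_positions, n_voxels))
X = np.vstack([0.5 * templates[p] + rng.normal(0.0, 1.0, (n_trials, n_voxels))
               for p in range(n_positions)])
y = np.repeat(np.arange(n_positions), n_trials)

# Multivariate pattern classification with cross-validation:
# decode position from the distributed activity pattern.
clf = LogisticRegression(max_iter=2000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```

Accuracy well above the 0.25 chance level is the signature of information carried by the distributed pattern, which is the logic behind decoding position from hippocampal activity.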
A comparison of four different imaging modalities - conventional, cross polarized, infra-red and ultra-violet - in the assessment of childhood bruising
Background
It is standard practice to image bruises of concern in children. We aimed to compare the clarity and measurements of bruises on cross-polarized, infra-red (IR) and ultra-violet (UV) images with those on conventional images.
Methods
Children aged <11 years with incidental bruising were recruited. Demographics, skin and bruise details were recorded. Bruises were imaged by standard protocols under conventional, cross-polarized, IR and UV light. Bruises were assessed in vivo for contrast, uniformity and diffuseness, and these characteristics were then compared across image modalities. Color images (conventional, cross-polarized) were segmented and measured in ImageJ; bruises on grey-scale images (IR, UV) were measured with an ImageJ plug-in. The maximum and minimum Feret's diameters, area and aspect ratio were determined. Measurements were compared across imaging modalities using Wilcoxon rank sum tests and modified Bland-Altman graphs. Significance was set at p < 0.05.
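As a rough illustration of the comparison step, the sketch below applies a Wilcoxon rank-sum test (scipy's `ranksums`) to hypothetical Feret's-diameter measurements from two modalities. The diameters and the size of the simulated IR shrinkage are invented for the example, not data from the study:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)

# Hypothetical maximum Feret's diameters (mm) for 19 bruises on
# conventional vs infra-red images; IR values are simulated as
# systematically smaller, mirroring the reported direction of effect.
conventional = rng.normal(25.0, 5.0, 19).clip(min=5.0)
infra_red = conventional - rng.normal(4.0, 1.0, 19)

# Two-sided Wilcoxon rank-sum (Mann-Whitney) test across modalities.
stat, p_value = ranksums(conventional, infra_red)
print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.4f}")
```

A modified Bland-Altman analysis, plotting the per-bruise difference against the per-bruise mean of the two modalities, would complement this by showing the size and direction of any systematic measurement bias.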
Results
Twenty-five children had 39 bruises. Bruises of low contrast, i.e. difficult to distinguish from the surrounding skin, were also more diffuse and less uniform in vivo. Low-contrast bruises were best seen on conventional and cross-polarized images and were less distinct on IR and UV images. Of the 19 bruises visible in all modalities, the only significant differences were that the maximum and minimum Feret's diameters and the area were smaller on IR images than on conventional images. Aspect ratios were not affected by the modality.
Conclusions
Conventional and cross-polarized imaging provide the most consistent bruise measurements, particularly for bruises that are not easily distinguished visually from the surrounding skin.
The Rising Light Curves of Type Ia Supernovae
We present an analysis of the early, rising light curves of 18 Type Ia supernovae (SNe Ia) discovered by the Palomar Transient Factory (PTF) and the La Silla-QUEST variability survey (LSQ). We fit these early flux data using a simple power law to determine the time of first light, and hence the rise time from first light to peak luminosity, and the exponent n of the power-law rise. The measured rise times vary substantially from SN to SN, and the exponent n shows significant departures from the simple 'fireball model' of n = 2 usually assumed in the literature, with significant diversity from event to event. This deviation has implications for the distribution of 56Ni throughout the SN ejecta, with a higher index suggesting a lesser degree of 56Ni mixing. The range of n found also confirms that the 56Ni distribution is not standard throughout the population of SNe Ia, in agreement with earlier work measuring such abundances through spectral modelling. We also show that the duration of the very early light curve, before the luminosity has reached half of its maximal value, does not correlate with the light-curve shape or stretch used to standardise SNe Ia in cosmological applications. This has implications for the cosmological fitting of SN Ia light curves. Comment: 19 pages, 19 figures, accepted for publication in MNRAS
Statistical mixing and aggregation in Feller diffusion
We consider Feller mean-reverting square-root diffusion, which has been
applied to model a wide variety of processes with linearly state-dependent
diffusion, such as stochastic volatility and interest rates in finance, and
neuronal and population dynamics in the natural sciences. We focus on the
statistical mixing (or superstatistical) process in which the parameter related
to the mean value can fluctuate - a plausible mechanism for the emergence of
heavy-tailed distributions. We obtain analytical results for the associated
probability density function (both stationary and time dependent), its
correlation structure and aggregation properties. Our results are applied to
explain the statistics of stock traded volume at different aggregation scales. Comment: 16 pages, 3 figures. To be published in Journal of Statistical Mechanics: Theory and Experiment
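The Feller square-root diffusion dX = θ(μ − X) dt + σ√X dW can be simulated with a full-truncation Euler-Maruyama scheme, a common way to keep the square root well-defined near zero. A minimal sketch with arbitrary parameters; it simulates only the base process, without the statistical mixing over the mean-related parameter that the paper studies:

```python
import numpy as np

def simulate_feller(x0, theta, mu, sigma, dt, n_steps, rng):
    """Euler-Maruyama simulation of the Feller (mean-reverting
    square-root) diffusion dX = theta*(mu - X) dt + sigma*sqrt(X) dW.
    The state is truncated at zero inside drift and diffusion terms
    (full truncation) so sqrt stays well-defined."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        xi = max(x[i], 0.0)
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + theta * (mu - xi) * dt + sigma * np.sqrt(xi) * dw
    return x

rng = np.random.default_rng(3)
path = simulate_feller(x0=1.0, theta=2.0, mu=1.0, sigma=0.3,
                       dt=1e-3, n_steps=20_000, rng=rng)
# After a burn-in, the time average should hover near the mean level mu.
print(f"long-run mean ≈ {path[5000:].mean():.2f} (theory: mu = 1.0)")
```

The mixing (superstatistical) construction of the paper would then correspond to drawing μ from a distribution for each realisation and averaging the resulting densities, which is how heavy tails can emerge.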
Using late-time optical and near-infrared spectra to constrain Type Ia supernova explosion properties
The late-time spectra of Type Ia supernovae (SNe Ia) are powerful probes of
the underlying physics of their explosions. We investigate the late-time
optical and near-infrared spectra of seven SNe Ia obtained at the VLT with
XShooter at 200 d after explosion. At these epochs, the inner Fe-rich ejecta
can be studied. We use a line-fitting analysis to determine the relative line
fluxes, velocity shifts, and line widths of prominent features contributing to
the spectra ([Fe II], [Ni II], and [Co III]). By focussing on [Fe II] and [Ni
II] emission lines in the ~7000-7500 \AA\ region of the spectrum, we find that
the ratio of stable [Ni II] to mainly radioactively-produced [Fe II] for most
SNe Ia in the sample is consistent with Chandrasekhar-mass delayed-detonation
explosion models, as well as sub-Chandrasekhar mass explosions that have
metallicity values above solar. The mean measured Ni/Fe abundance of our sample
is consistent with the solar value. The more highly ionised [Co III] emission
lines are found to be more centrally located in the ejecta and have broader
lines than the [Fe II] and [Ni II] features. Our analysis also strengthens
previous results that SNe Ia with higher Si II velocities at maximum light
preferentially display blueshifted [Fe II] 7155 \AA\ lines at late times. Our
combined results lead us to speculate that the majority of normal SN Ia
explosions produce ejecta distributions that deviate significantly from
spherical symmetry. Comment: 17 pages, 12 figures, accepted for publication in MNRAS
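The line-fitting analysis can be illustrated with a single Gaussian fit to an emission feature: the fitted centre gives the velocity shift relative to the rest wavelength, and the fitted width gives the line width. The sketch below uses an invented noisy [Fe II] 7155 Å-like profile, not the XShooter data, and a deliberately imposed ~1000 km/s blueshift:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299_792.458  # speed of light, km/s

def gaussian(wave, amp, center, sigma):
    """Single Gaussian emission-line profile."""
    return amp * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

# Synthetic [Fe II] 7155 Å feature, blueshifted by about 1000 km/s.
rest = 7155.0
rng = np.random.default_rng(4)
wave = np.linspace(7000.0, 7300.0, 300)
true_center = rest * (1.0 - 1000.0 / C_KMS)
flux = gaussian(wave, 1.0, true_center, 25.0) + rng.normal(0.0, 0.03, wave.size)

popt, _ = curve_fit(gaussian, wave, flux, p0=[0.8, 7150.0, 20.0])
amp_fit, center_fit, sigma_fit = popt
v_shift = C_KMS * (center_fit - rest) / rest   # km/s, negative = blueshift
fwhm_kms = 2.3548 * sigma_fit / rest * C_KMS   # line width in km/s
print(f"velocity shift ≈ {v_shift:.0f} km/s, FWHM ≈ {fwhm_kms:.0f} km/s")
```

Real late-time spectra require fitting several overlapping components at once (e.g. [Fe II] and [Ni II] in the ~7000-7500 Å region), but each component is characterised by the same three quantities: flux, velocity shift, and width.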
Can biological quantum networks solve NP-hard problems?
There is a widespread view that the human brain is so complex that it cannot
be efficiently simulated by universal Turing machines. During the last decades
the question has therefore been raised whether we need to consider quantum
effects to explain the imagined cognitive power of a conscious mind.
This paper presents a personal view of several fields of philosophy and
computational neurobiology in an attempt to suggest a realistic picture of how
the brain might work as a basis for perception, consciousness and cognition.
The purpose is to be able to identify and evaluate instances where quantum
effects might play a significant role in cognitive processes.
Not surprisingly, the conclusion is that quantum-enhanced cognition and
intelligence are very unlikely to be found in biological brains. Quantum
effects may certainly influence the functionality of various components and
signalling pathways at the molecular level in the brain network, like ion
ports, synapses, sensors, and enzymes. This might evidently influence the
functionality of some nodes and perhaps even the overall intelligence of the
brain network, but hardly give it any dramatically enhanced functionality. So,
the conclusion is that biological quantum networks can only approximately solve
small instances of NP-hard problems.
On the other hand, artificial intelligence and machine learning implemented
in complex dynamical systems based on genuine quantum networks can certainly be
expected to show enhanced performance and quantum advantage compared with
classical networks. Nevertheless, even quantum networks can only be expected to
efficiently solve NP-hard problems approximately. In the end it is a question
of precision - Nature is approximate. Comment: 38 pages
The effect of silica-containing binders on the titanium/face coat reaction
The interactions of CP-Ti and Ti-6Al-4V with investment molds containing alumina/silica and yttria/silica face coat systems were studied. Containerless melting in a vacuum was employed and small test samples were made by drop casting into the molds. The effects of the face coat material and mold preheat temperatures on the thickness of the alpha case on the castings were evaluated with microhardness and microprobe measurements. It was found that the thickness of the alpha case was the same, whether a yttria/silica or alumina/silica face coat was used, indicating that the silica binder reacted with the titanium. Hence, the use of expensive refractories, such as yttria, represents an unnecessary cost when combined with a silica binder. It was also found that the alloyed titanium castings had a thinner alpha case than those produced from CP-Ti, which suggests that the thickness of the alpha case depends on the crystal structure of the alloy during cooling from high temperatures. Furthermore, castings made in small yttria crucibles used as molds exhibited little or no alpha case