Tuesday, 12 August 2014

X-ray bananas

This year's discoveries follow the well-known 5-stage Kübler-Ross pattern: 1) announcement, 2) excitement, 3) debunking, 4) confusion, 5) depression. While BICEP is approaching the end of the cycle, the sterile neutrino dark matter signal reported earlier this year is now entering stage 3. This is thanks to yesterday's paper entitled Dark matter searches going bananas by Tesla Jeltema and Stefano Profumo (to my surprise, this is not the first banana in a physics paper's title).

In the previous episode, two independent analyses using public data from the XMM and Chandra satellites reported an anomalous 3.55 keV monochromatic emission line from galactic clusters and from Andromeda. One possible interpretation is a 7.1 keV sterile neutrino dark matter particle decaying to a photon and a standard neutrino. If the signal were confirmed, and conventional explanations (via known atomic emission lines) excluded, it would mean we are close to solving the dark matter puzzle.

It seems this is not gonna happen. The new paper makes two claims:

  1. Limits from x-ray observations of the Milky Way center exclude the sterile neutrino interpretation of the reported signal from galactic clusters. 
  2. In any case, there's no significant anomalous emission line from galactic clusters near 3.55 keV.       

Let's begin with the first claim. The authors analyze several days of XMM observations of the Milky Way center. They find that the observed spectrum can be very well fit by known plasma emission lines. In particular, all spectral features near 3.5 keV are accounted for once the Potassium XVIII lines at 3.48 and 3.52 keV are included in the fit. Based on that agreement, they derive strong bounds on the parameters of the sterile neutrino dark matter model: the mixing angle between the sterile and standard neutrinos should satisfy sin^2(2θ) ≤ 2*10^-11. This excludes the parameter space favored by the previous detection of the 3.55 keV line in galactic clusters. The conclusions are similar to, and even somewhat stronger than, those of the earlier analysis using Chandra data.
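For orientation, the sensitivity of such x-ray searches follows from the standard radiative decay width of a sterile neutrino (the numerical prefactor below is the commonly quoted one; treat it as approximate):

    \Gamma(\nu_s \to \nu\gamma) = \frac{9\,\alpha\, G_F^2}{1024\,\pi^4}\,\sin^2(2\theta)\, m_s^5
        \approx 1.4\times 10^{-29}\ {\rm s}^{-1}\,\left(\frac{\sin^2(2\theta)}{10^{-7}}\right)\left(\frac{m_s}{1\ {\rm keV}}\right)^5 ,

with the photon carrying away E_γ = m_s/2, which is why a 3.55 keV line points to m_s ≈ 7.1 keV. The expected line flux from a given target scales linearly with sin^2(2θ), so an upper limit on the flux translates directly into the quoted bound on the mixing angle.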

This is disappointing but not a disaster yet, as there are alternative dark matter models (e.g. axions converting to photons in the magnetic field of a galaxy) that do not predict observable emission lines from our galaxy. But there's one important corollary of the new analysis. It seems that the inferred strength of the Potassium XVIII lines relative to the strength of other atomic lines does not agree well with theoretical models of plasma emission. Such models were an important ingredient in the previous analyses that found the signal. In particular, the original 3.55 keV detection paper assumed upper limits on the strength of the Potassium XVIII line derived from the observed strength of the Sulfur XVI line. But the new findings suggest that the systematic errors may have been underestimated. Allowing for a higher flux of Potassium XVIII, and also including the 3.51 keV Chlorine XVII line (which was missed in the previous analyses), one can obtain a good fit to the observed x-ray spectrum of galactic clusters without introducing a dark matter emission line. Right... we suspected something was smelling bad here, and now we know it was chlorine... Finally, the new paper reanalyses the x-ray spectrum of Andromeda and disagrees with the previous findings: there's a hint of the 3.53 keV anomalous emission line from Andromeda, but its significance is merely 1 sigma.

So, the putative dark matter signals are dropping like flies these days. We urgently need new ones to replenish my graph ;)

Note added: While finalizing this post I became aware of today's paper that, using the same data, DOES find a 3.55 keV line from the Milky Way center. So we're already at stage 4... it seems that the devil is in the details of how the potassium lines are modeled (which, frankly speaking, is not reassuring).

Wednesday, 23 July 2014

Higgs Recap

On the occasion of summer conferences the LHC experiments dumped a large number of new Higgs results. Most of them have already been advertised on blogs, see e.g. here or here or here. In case you missed anything, here I summarize the most interesting updates of the last few weeks.

1. Mass measurements.
Both ATLAS and CMS recently presented improved measurements of the Higgs boson mass in the diphoton and 4-lepton final states. The errors shrink to 400 MeV in ATLAS and 300 MeV in CMS. The news is that Higgs has lost some weight (the boson, not Peter). A naive combination of the ATLAS and CMS results yields the central value 125.15 GeV. The profound consequence is that, for another year at least,  we will call it the 125 GeV particle, rather than the 125.5 GeV particle as before ;)
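For the record, here is the kind of back-of-the-envelope combination behind that number: a simple inverse-variance average of the two quoted results, ignoring correlated systematics (so purely illustrative, not an official combination):

    # A naive inverse-variance-weighted average of the ATLAS and CMS Higgs mass
    # measurements quoted above (correlated systematics are ignored, so this is
    # only a back-of-the-envelope cross-check, not an official combination).
    masses = [125.36, 125.03]   # central values in GeV (ATLAS, CMS)
    errors = [0.40, 0.30]       # total uncertainties in GeV

    weights = [1.0 / e**2 for e in errors]
    m_comb = sum(w * m for w, m in zip(weights, masses)) / sum(weights)
    err_comb = sum(weights) ** -0.5

    print(f"combined Higgs mass = {m_comb:.2f} +- {err_comb:.2f} GeV")
    # prints roughly: combined Higgs mass = 125.15 +- 0.24 GeV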

While the central values of the Higgs mass combinations quoted by ATLAS and CMS are very close, 125.36 vs 125.03 GeV, the individual inputs are still somewhat apart from each other. Although the consistency of the ATLAS measurements in the diphoton and 4-lepton channels has improved, these two independent mass determinations differ by 1.5 GeV, which corresponds to a 2 sigma tension. Furthermore, the central values of the Higgs mass quoted by ATLAS and CMS differ by 1.3 GeV in the diphoton channel and by 1.1 GeV in the 4-lepton channel, which also amount to 2-sigmish discrepancies. This could be just bad luck, or maybe the systematic errors are slightly larger than the experimentalists think.

2. Diphoton rate update.
CMS finally released a new value of the Higgs signal strength in the diphoton channel. This CMS measurement was a bit of a roller-coaster: initially they measured an excess, then with the full dataset they reported a small deficit. After more work and more calibration they settled on the value 1.14 +0.26/-0.23 relative to the standard model prediction, in perfect agreement with the standard model. Meanwhile, ATLAS is also revising the signal strength in this channel towards the standard model value. The number 1.29±0.30 quoted on the occasion of the mass measurement is not yet the final one; a dedicated signal strength measurement will follow soon, most likely with a slightly smaller error. Nevertheless, we can safely announce that the celebrated Higgs diphoton excess is no more.

3. Off-shell Higgs.
Most of the LHC searches are concerned with an on-shell Higgs, that is, one whose 4-momentum squared is very close to its mass squared. This is where the Higgs is most easily recognizable, since it shows up as a bump in invariant mass distributions. However the Higgs, like any quantum particle, can also appear as a virtual particle off mass shell and influence, in a subtler way, the cross sections or differential distributions of various processes. One place where an off-shell Higgs may visibly contribute is the pair production of on-shell Z bosons. In this case, the interference between the gluon-gluon → Higgs → ZZ process and the non-Higgs one-loop Standard Model contribution to gluon-gluon → ZZ can affect the cross section in a non-negligible way. At the beginning, these off-shell measurements were advertised as a model-independent Higgs width measurement, although it is now recognized that the "model-independent" claim does not hold. Nevertheless, measuring the ratio of off-shell to on-shell Higgs production provides qualitatively new information about the Higgs couplings and, under some specific assumptions, can be interpreted as an indirect constraint on the Higgs width. Both ATLAS and CMS now quote constraints on the Higgs width at the level of 5 times the Standard Model value. Currently, these results are not very useful in practice: it would require a tremendous conspiracy to reconcile the current data with a Higgs width larger than about 1.3 times the standard model one. But a new front has been opened, and one hopes for much more interesting results in the future.
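The logic behind the width constraint can be summarized schematically, assuming the same couplings govern on-shell and off-shell production (which is precisely the assumption one may question):

    \sigma^{\rm on}_{gg\to h\to ZZ} \propto \frac{g_{ggh}^2\, g_{hZZ}^2}{\Gamma_h}\,, \qquad
    \sigma^{\rm off}_{gg\to h^*\to ZZ} \propto g_{ggh}^2\, g_{hZZ}^2
    \quad\Rightarrow\quad \frac{\sigma^{\rm off}}{\sigma^{\rm on}} \propto \Gamma_h\,.

A measured off-shell rate close to the standard model one therefore bounds Γ_h from above, but only under that coupling assumption.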


4. Tensor structure of Higgs couplings.
Another front that is being opened as we speak is constraining higher-order Higgs couplings with a different tensor structure. So far, we have been given the so-called spin/parity measurements. That is to say, the LHC experiments imagine a 125 GeV particle with a different spin and/or parity than the Higgs, and couplings to matter consistent with that hypothesis. Then they test whether this new particle or the standard model Higgs better describes the observed differential distributions of Higgs decay products. This has some appeal to the general public and Nobel committees but little practical meaning. That's because the current data, especially the Higgs signal strength measured in multiple channels, clearly show that the Higgs is, in the first approximation, the standard model one. New physics, if it exists, may only be a small perturbation on top of the standard model couplings. The relevant question is how well we can constrain these perturbations. For example, consider the possible couplings of the Higgs to the Z boson.
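Schematically, and with one possible normalization (conventions for the a coefficients differ between papers), the effective hZZ interactions can be parametrized as

    \mathcal{L}_{hZZ} = \frac{h}{v}\left[ (1+a_1)\, m_Z^2\, Z_\mu Z^\mu + a_2\, Z_{\mu\nu} Z^{\mu\nu} + a_3\, Z_{\mu\nu} \tilde{Z}^{\mu\nu} \right],
    \qquad \tilde{Z}^{\mu\nu} \equiv \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma} Z_{\rho\sigma}\,.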

In the standard model only the first type of coupling is present in the Lagrangian, and all the a coefficients are zero. New heavy particles coupled to the Higgs and Z bosons could be indirectly detected by measuring non-zero a's. In particular, a3 violates parity and could arise from mixing of the standard model Higgs with a pseudoscalar particle. The presence of non-zero a's would show up, for example, as a modification of the lepton momentum distributions in the Higgs decay to 4 leptons. This was studied by CMS in this note. What they do is not perfect yet, and the results are presented in an unnecessarily complicated fashion. In any case, it's a step in the right direction: as the analysis improves and more statistics are accumulated in the next runs, these measurements will become an important probe of new physics.

5. Flavor violating decays.
In the standard model, the Higgs couplings conserve flavor, in both the quark and the lepton sectors. This is a consequence of the assumptions that the theory is renormalizable and that only one Higgs field is present. If either of these assumptions is violated, the Higgs boson may mediate transitions between different generations of matter. Earlier, ATLAS and CMS searched for top quark decays to a charm quark and a Higgs boson. More recently, CMS turned to lepton flavor violation, searching for Higgs decays to τμ pairs. This decay cannot occur in the standard model, so the search is a clean null test. At the same time, the final state is relatively simple from the experimental point of view, so this decay may be a sensitive probe of new physics. Amusingly, CMS sees a 2.5 sigma excess corresponding to an h→τμ branching fraction of order 1%. So we can entertain the possibility that the Higgs holds the key to new physics and flavor hierarchies, at least until ATLAS comes out with its own measurement.
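To get a feeling for what a percent-level branching fraction means for the couplings, here is a rough numerical translation; the width formula and the ≈4.1 MeV standard model total width are standard inputs, but the script is only a sketch:

    from math import pi, sqrt

    # Rough translation of an h -> tau mu branching fraction into the size of
    # the flavor-violating Yukawa couplings, using
    # Gamma(h -> tau mu) = m_h/(8*pi) * (|Y_mutau|^2 + |Y_taumu|^2).
    m_h = 125.0          # Higgs mass in GeV
    gamma_sm = 4.1e-3    # SM Higgs total width in GeV
    br = 0.01            # h -> tau mu branching fraction suggested by the excess

    gamma_taumu = br / (1.0 - br) * gamma_sm   # partial width in GeV
    y_eff = sqrt(8 * pi * gamma_taumu / m_h)   # sqrt(|Y_mutau|^2 + |Y_taumu|^2)

    print(f"Gamma(h->tau mu) ~ {1e3*gamma_taumu:.3f} MeV, |Y| ~ {y_eff:.1e}")
    # prints an effective Yukawa of about 3e-3, i.e. the same ballpark as the
    # SM tau Yukawa (~0.01)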

Saturday, 19 July 2014

Weekend Plot: Prodigal CRESST

CRESST is one of the dark matter direct detection experiments seeing an excess which may be interpreted as a signal of a fairly light (order 10 GeV) dark matter particle.  Or it was... This week they posted a new paper reporting on new data collected last year with an upgraded detector. Farewell CRESST signal, welcome CRESST limits:
The new limits (red line) exclude most of the parameter space favored by the previous CRESST excess: the regions M1 and M2 in the plot. Of course, these regions were never taken at face value, because they are excluded by orders of magnitude by the LUX, Xenon, and CDMS experiments. Nevertheless, the excess pointed to a dark matter mass similar to the signals reported by DAMA, CoGeNT, and CDMS-Si (the other color stains), which prompted many to speculate about a common origin of all these anomalies. Now the excess is gone. Instead, CRESST emerges as an interesting player in the race toward the neutrino wall. Their target material - CaWO4 crystals - contains oxygen nuclei which, due to their small mass, are well suited for detecting light dark matter. The kink in the limit curve near 5 GeV marks the point below which dark-matter-induced recoil events would be dominated by scattering on oxygen. Currently, CRESST has the world's best limits for dark matter masses between 2 and 3 GeV, beating DAMIC (not shown in the plot) and CDMSlite (dashed green line).
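The role of oxygen is simple kinematics: for elastic scattering the maximum recoil energy is E_R = 2μ²v²/m_N, with μ the dark matter-nucleus reduced mass. A quick comparison for the three nuclei in CaWO4 (the assumed maximal dark matter velocity of ~780 km/s is a round number of my choosing):

    # Maximum nuclear recoil energy for elastic dark matter scattering,
    # E_R_max = 2 mu^2 v^2 / m_N, compared for the three nuclei in CaWO4.
    # The assumed maximal dark matter velocity (~780 km/s in the detector
    # frame) is a round illustrative number, not the value used by CRESST.
    V_MAX = 780.0 / 3.0e5   # velocity in units of the speed of light

    def max_recoil_keV(m_dm, m_nucleus):
        """Return the maximal recoil energy in keV (masses in GeV)."""
        mu = m_dm * m_nucleus / (m_dm + m_nucleus)       # reduced mass in GeV
        return 2.0 * mu**2 * V_MAX**2 / m_nucleus * 1e6  # GeV -> keV

    m_dm = 3.0  # a light dark matter candidate, in GeV
    for name, m_n in [("oxygen-16", 14.9), ("calcium-40", 37.2), ("tungsten-184", 171.3)]:
        print(f"{name}: E_R_max ~ {max_recoil_keV(m_dm, m_n):.1f} keV")
    # oxygen gives by far the largest recoils (~6 keV here vs ~0.7 keV on
    # tungsten), which is why it drives the sensitivity to few-GeV dark matter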

Sunday, 15 June 2014

Weekend Plot: BaBar vs Dark Force

BaBar was an experiment studying 10 GeV electron-positron collisions. The collider is long gone, but interesting results keep appearing from time to time. Obviously, this is not the place to discover new heavy particles. However, thanks to the large luminosity and clean experimental environment, BaBar is well equipped to look for light and very weakly coupled particles that can easily escape detection in bigger but dirtier machines like the LHC. Today's weekend plot shows the new BaBar limits on dark photons:

The dark photon is a hypothetical spin-1 boson that couples to other particles with a strength proportional to their electric charges. Compared to the ordinary photon, the dark one is assumed to have a non-zero mass mA' and a coupling strength suppressed by the factor ε. If ε is small enough, the dark photon can escape detection even if mA' is very small, in the MeV or GeV range. The model was conceived long ago, but in the previous decade it gained wider popularity as the leading explanation of the PAMELA anomaly. Now, as PAMELA is getting older, she is no longer considered convincing evidence of new physics. But the dark photon model remains an important benchmark - a sort of spherical cow model for light hidden sectors. Indeed, in the simplest realization, the model is fully described by just two parameters, mA' and ε, which makes it easy to present and compare the results of different searches.
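Concretely, the minimal model adds to the standard model only a kinetic mixing term (conventions for ε vary between papers; this is one common choice):

    \mathcal{L} \supset -\tfrac{1}{4} F'_{\mu\nu} F'^{\mu\nu} + \tfrac{1}{2} m_{A'}^2 A'_\mu A'^\mu - \tfrac{\epsilon}{2}\, F_{\mu\nu} F'^{\mu\nu}\,.

After diagonalizing the kinetic terms, the dark photon couples to a fermion of electric charge Q with strength εeQ, which is what the searches constrain.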

In electron-positron collisions one can produce a dark photon in association with an ordinary photon, in analogy to the familiar process of e+e- annihilation into 2 photons. The dark photon then decays to a pair of electrons or muons (or heavier charged particles, if they are kinematically accessible). Thus, the signature is a spike in the e+e- or μ+μ- invariant mass spectrum of γl+l- events. BaBar performed this search and obtained the world's best limits on dark photons in the mass range 30 MeV - 10 GeV, with the upper limit on ε in the 0.001 ballpark. This has no direct consequences for the explanation of the PAMELA anomaly, as that model works with a smaller ε too. On the other hand, the new results close in on the parameter space where the minimal dark photon model can explain the muon magnetic moment anomaly (although one should be aware that the tension can be reduced by a trivial modification of the model, namely by allowing the dark photon to decay into the hidden sector).

So, no luck so far; we need to search further. The takeaway is that finding new heavy particles and finding new light weakly interacting particles seem equally probable at this point :)

Monday, 2 June 2014

Another one bites the dust...

...though it's not BICEP2 this time :) This is a long overdue update on the forward-backward asymmetry of top quark production.
Recall that, in a collision of a quark and an anti-quark producing a top quark together with its antiparticle, the top quark is more often ejected in the direction of the incoming quark (as opposed to the anti-quark). This effect could be most easily studied at the Tevatron, which collided protons with antiprotons, so that the directions of the quark and of the anti-quark could be easily inferred. Indeed, the Tevatron experiments observed the asymmetry at a high confidence level. In the leading-order approximation the Standard Model predicts zero asymmetry, which boils down to the fact that the gluons mediating the production process couple with the same strength to left- and right-handed quark polarizations. Taking into account one-loop quantum corrections leads to a small but non-zero asymmetry.
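In formulas, the observable is simply the standard counting asymmetry (Δy is the top-antitop rapidity difference):

    A_{FB} = \frac{N(\Delta y > 0) - N(\Delta y < 0)}{N(\Delta y > 0) + N(\Delta y < 0)}\,, \qquad \Delta y = y_t - y_{\bar t}\,,

so a positive A_FB means the top quark preferentially follows the direction of the incoming quark.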
Intriguingly, the asymmetry measured at the Tevatron appeared to be large, of order 20%, significantly more than the value predicted by Standard Model loop effects. On top of this, the dependence of the asymmetry on the top-pair invariant mass and the angular distribution of leptons from top quark decays deviated strongly from the Standard Model expectation. All in all, the ttbar forward-backward anomaly was considered, for many years, one of our best hints for physics beyond the Standard Model. The asymmetry could be interpreted, for example, as being due to new heavy resonances with the quantum numbers of the gluon, which are predicted by models where quarks are composite objects. However, the story has been getting less and less exciting lately. First of all, no other top quark observables (e.g. the total production cross section) showed any deviations, either at the Tevatron or at the LHC. Another worry was that the related top asymmetry was not observed at the LHC. At the same time, the Tevatron numbers have been evolving in a worrisome direction: as the Standard Model computation was refined, the prediction went up, while the experimental value steadily went down as more data were added. Today we are close to the point where the Standard Model and experiment finally meet...

The final straw is a pair of recent updates from the Tevatron's D0 experiment. Earlier this year, D0 published a measurement of the forward-backward asymmetry of the direction of the leptons from top quark decays. The top quark sometimes decays leptonically, to a b-quark, a neutrino, and a charged lepton (e+, μ+). In this case, the momentum of the lepton is to some extent correlated with that of the parent top, so the top quark asymmetry may come together with a lepton asymmetry (although some new physics models affect the top and lepton asymmetries in completely different ways). The previous D0 measurement showed a large, more than 3 sigma, excess in that observable. The new refined analysis using the full dataset reaches a different conclusion: the asymmetry is Al=(4.2 ± 2.4)%, in good agreement with the Standard Model. As can be seen in the picture, none of the CDF and D0 measurements of the lepton asymmetry in several final states shows any anomaly at this point. Then came the D0 update of the regular ttbar forward-backward asymmetry in the semi-leptonic channel. Same story here: the number went down from about 20% to Att=(10.6 ± 3.0)%, compared to the Standard Model prediction of 9%. CDF got a slightly larger number here, Att=(16.4 ± 4.5)%, but taken together the results are not significantly above the Standard Model prediction of Att=9%.
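A naive weighted average of the two semi-leptonic measurements, ignoring correlations between the experiments, illustrates the "taken together" statement:

    # A naive weighted average of the D0 and CDF semi-leptonic ttbar
    # asymmetries quoted above, and its pull with respect to the SM prediction
    # of ~9%. Correlations between the experiments are ignored, so this is
    # only indicative.
    measurements = [(10.6, 3.0), (16.4, 4.5)]   # (central value, error) in percent
    a_sm = 9.0                                  # SM prediction in percent

    weights = [1.0 / err**2 for _, err in measurements]
    a_comb = sum(w * a for w, (a, _) in zip(weights, measurements)) / sum(weights)
    err_comb = sum(weights) ** -0.5
    pull = (a_comb - a_sm) / err_comb

    print(f"combined Att = ({a_comb:.1f} +- {err_comb:.1f})%, {pull:.1f} sigma above the SM")
    # prints roughly: combined Att = (12.4 +- 2.5)%, 1.4 sigma above the SM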

So, all the current data on the top quark, both from the LHC and from the Tevatron,  are perfectly consistent with the Standard Model predictions. There may be new physics somewhere at the weak scale, but we're not gonna pin it down by measuring the top asymmetry. This one is a dead parrot:



Graphics borrowed from this talk

Sunday, 25 May 2014

Weekend plot: BICEP limits on tensor modes

 The insurgency gathers pace. This weekend we contemplate a plot from the recent paper of Michael Mortonson and Uroš Seljak:

It shows the parameter space of inflationary models in the plane of the spectral index ns vs. the tensor-to-scalar ratio r. The yellow region is derived from Planck CMB temperature and WMAP polarization data, while the purple regions combine those with the BICEP2 data. Including BICEP gives a stronger constraint on the tensor modes, rather than a detection of r≠0.

The limits on r from Planck temperature data are dominated by the large-angular-scale (low-l) data, which themselves display an anomaly, so they should be taken with a grain of salt. The interesting claim here is that BICEP alone does not hint at r≠0, after using the most up-to-date information on galactic foregrounds and marginalizing over the current uncertainties. In this respect, the paper by Michael and Uroš reaches similar conclusions to the analysis of Raphael Flauger and collaborators. The BICEP collaboration originally found that the galactic dust foreground can account for at most 25% of their signal. However, judging from scavenged Planck polarization data, it appears that BICEP underestimated the dust polarization fraction by roughly a factor of 2. As this enters squared in the B-mode correlation spectrum, dust can easily account for all of the signal observed by BICEP2. The new paper adds a few interesting details to the story. One is that not only the normalization but also the shape of the BICEP spectrum can be reasonably explained by dust, if it scales as l^-2.3 as suggested by Planck data. Another is the importance of gravitational lensing effects (neglected by BICEP) in extracting the signal of tensor modes. Although lensing dominates at high l, it also helps to fit the low-l BICEP2 data with r=0. Finally, the paper suggests that it is not at all certain that the forthcoming Planck data will clean up the situation. If the uncertainty on the dust foreground in the BICEP patch is of order 20%, which looks like a reasonable figure, the improvement over the current sensitivity to tensor modes may be marginal. So, BICEP may remain a Schrödinger cat for a little while longer.
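As an aside on that factor of 2: it matters so much because the dust B-mode power scales with the square of the polarization fraction p (schematically):

    C_\ell^{BB,\,\rm dust} \propto p^2\,, \qquad p \to 2p \;\Rightarrow\; C_\ell^{BB,\,\rm dust} \to 4\, C_\ell^{BB,\,\rm dust}\,,

so underestimating p by a factor of 2 means underestimating the dust contamination by a factor of 4.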

Friday, 16 May 2014

Follow up on BICEP

The BICEP2 collaboration claims the discovery of the primordial B-mode in the CMB at a very high confidence level. Résonaances recently reported on the Chinese whispers casting doubt on the statistical significance of that result. They were based in part on the work of Raphael Flauger and Colin Hill, rumors of which had been spreading through email and coffee-time discussions. Today Raphael gave a public seminar describing this analysis; see the slides and the video.

The familiar number r=0.2 for the CMB tensor-to-scalar ratio is based on the assumption of zero foreground contribution in the region of the sky observed by BICEP. To argue that foregrounds should not be a big effect, the BICEP paper studied several models of the galactic dust emission. Of those, only the data-driven models DDM1 and DDM2 were based on actual polarization data, inadvertently shared by Planck. However, even these models suggest that foregrounds are not completely negligible. For example, subtracting the foregrounds estimated via DDM2 brings the central value of r down to 0.16 or 0.12, depending on how the model is used (cross-correlation vs. auto-correlation). If, instead, the cross-correlated BICEP2 and Keck Array data are used as an input, the tensor-to-scalar ratio can easily be below 0.1, in agreement with the existing bounds from Planck and WMAP.

Raphael's message is that, according to his analysis, the foreground emissions are larger than estimated by BICEP, and that systematic uncertainties on that estimate (due to incomplete information, modeling uncertainties, and scraping numbers from pdf slides) are also large. If that is true, the statistical significance of the primordial B-mode  detection is much weaker than what is being claimed by BICEP.

In his talk, Raphael described an independent attempt, the most complete to date, to extract the foregrounds from existing data. Apart from using the same Planck polarization fraction map as BICEP, he also included the Q and U all-sky maps (the letters refer to how polarization is parameterized), and models of polarized dust emission based on HI maps (the 21cm hydrogen line emission is supposed to track the galactic dust). One reason for the discrepancy with the BICEP estimates could be that the effect of the Cosmic Infrared Background - mostly unpolarized emission from faraway galaxies - is non-negligible. The green band in the plot shows the polarized dust emission obtained from the CIB-corrected DDM2 model, and compares it to the original BICEP estimate (blue dashed line).

The analysis then goes on to extract the foregrounds starting from several different premises. All available datasets (polarization reconstructed via HI maps, the information scraped from the existing Planck polarization maps) seem to tell a similar story: galactic foregrounds can be large in the region of interest, and the uncertainties are large. The money plot is this one:

Recall that the primordial B-mode signal should show up at moderate angular scales, l∼100 (the high-l end is dominated by non-primordial B-modes from gravitational lensing). Given the current uncertainties, the foreground emission may easily account for the entire BICEP2 signal in that region. Again, this does not prove that tensor modes cannot be there. The story may still reach a happy ending, much like that of the discovery of accelerated expansion (where serious doubts about systematic uncertainties were also raised after the initial announcement). But the ball is now in BICEP's court: they need to convincingly demonstrate that foregrounds are under control.

Until that happens, I think their result does not stand.