Thursday, 19 November 2015

Leptoquarks strike back

Leptoquarks are hypothetical scalar particles that carry both color and electroweak charges. Nothing like that exists in the Standard Model, where the only scalar is the Higgs boson, which is a color singlet. In the particle physics community, leptoquarks enjoy a status similar to Nickelback's in music: everybody has heard of them, but no one likes them. It is not completely clear why... maybe they are confused with leprechauns, maybe because they sometimes lead to proton decay, or maybe because they rarely arise in cherished models of new physics. Recently, however, there has been some renewed interest in leptoquarks. The reason is that these particles seem well equipped to address the hottest topic of this year - the B-meson anomalies.

There are at least 3 distinct B-meson anomalies that are currently intriguing:
  1.  A few sigma (2 to 4, depending on who you ask) deviation in the differential distributions of B → K*μμ decays, 
  2.  A 2.6 sigma violation of lepton flavor universality in B → Kμμ vs B → Kee decays, 
  3.  A 3.5 sigma violation of lepton flavor universality, this time in B → Dτν vs B → Dμν decays. 
Now, leptoquarks with masses in the TeV ballpark can explain any of these anomalies. How? In analogy to the Higgs, leptoquarks may interact with the Standard Model fermions via Yukawa couplings. Which interactions are possible is determined by the leptoquark's color and electroweak charges. For example, this paper proposed a leptoquark transforming as (3,2,1/6) under the Standard Model gauge symmetry (color SU(3) triplet like quarks, weak SU(2) doublet like the Higgs, hypercharge 1/6). Such a particle can have the following Yukawa couplings with b- and s-quarks and muons:
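L ⊃ λb (b̄R Φ Lμ) + λs (s̄R Φ Lμ) + h.c.

This is written schematically, with SU(2) contractions suppressed: Φ is the leptoquark doublet and Lμ the lepton doublet containing the muon. It is the generic form dictated by the quantum numbers; the conventions in the original paper may differ.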
If both λb and λs are non-zero, then tree-level leptoquark exchange can mediate the b-quark decay b → s μ μ. This contribution adds to the Standard Model amplitudes mediated by loops of W bosons, and thus affects the B-meson observables. It turns out that the first two anomalies listed above can be fit if the leptoquark mass is in the 1-50 TeV range, depending on the magnitude of λb and λs.
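The reason such a wide mass range works is easy to see. Well below the leptoquark mass M, the exchange reduces to a four-fermion operator scaling as (λb λs/M^2) (s̄ Γ b)(μ̄ Γ μ), with Γ standing for some Dirac structure. The observables constrain only the combination λb λs/M^2, so a heavier leptoquark can always be traded for larger Yukawa couplings, up to the point where the couplings become non-perturbative.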

The third anomaly above can also be easily explained by leptoquarks. One example from this paper is a leptoquark transforming as (3,1,-1/3) and coupling to matter as
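L ⊃ λ (Q̄^c L) S + λ' (ū^c_R e_R) S + h.c.

Again this is schematic, with color and SU(2) contractions suppressed: S is the leptoquark, while Q and L are the left-handed quark and lepton doublets. It is the generic form allowed by the quantum numbers rather than the paper's exact conventions; the terms relevant for b → c τ ν involve the third-generation leptons.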

This particle contributes to b → c τ ν, adding to the tree-level W boson contribution, and is capable of explaining the apparent excess of semi-leptonic B-meson decays into D mesons and tau leptons observed by the BaBar, Belle, and LHCb experiments. The difference with respect to the previous case is that this leptoquark has to be less massive, closer to the TeV scale, because it has to compete with a tree-level contribution in the Standard Model.
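A rough estimate shows why. The Standard Model amplitude for b → c τ ν scales as 1/v^2, with v ≈ 246 GeV, so an O(10%) correction requires λ^2/M^2 ~ 0.1/v^2, i.e. M ~ λ × 0.8 TeV. For couplings of order one, the leptoquark must then sit around a TeV (this is only a parametric estimate, ignoring all O(1) factors).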

There are more kinds of leptoquarks, with different charges, that allow for Yukawa couplings to matter. Some of them could also explain the 3 sigma discrepancy between the experimentally measured muon anomalous magnetic moment and the Standard Model prediction. In fact, a recent paper argues that the (3,1,-1/3) leptoquark discussed above can explain all the B-meson and muon g-2 anomalies simultaneously, through a combination of tree-level and loop effects. In any case, this is something to look out for in this and next year's data. If a leptoquark is indeed the culprit for the B → Dτν excess, it should be within reach of the 13 TeV run (for the first two anomalies it may well be too heavy to be produced at the LHC). The current reach for leptoquarks extends up to masses of about 1 TeV (depending strongly on model details), see e.g. the recent ATLAS and CMS analyses. So far these searches have provoked little public interest, but that may change soon...

Thursday, 12 November 2015

A year at 13 TeV

A week ago the LHC finished the 2015 run of 13 TeV proton collisions. The counter in ATLAS stopped exactly at 4 inverse femtobarns. CMS reports just 10% less, though it is not clear what fraction of these data was collected with their magnet on (probably about a half). Anyway, it should have been better, it could have been worse... 4 fb-1 is one fifth of what ATLAS and CMS collected in the glorious year 2012. On the other hand, the higher collision energy in 2015 translates into larger production cross sections, even for particles within the kinematic reach of the 8 TeV collisions. How this trade-off works in practice depends on the studied process. A few examples are shown in the plot below.
We see that, for processes initiated by collisions of a quark inside one proton with an antiquark inside the other proton, the cross section gain is the least favorable. Still, for hypothetical resonances heavier than ~1.7 TeV, more signal events were produced in the 2015 run than in the previous one. For example, for a 2 TeV W-prime resonance, possibly observed by ATLAS in the 8 TeV data, the net gain is 50%, corresponding to roughly 15 events predicted in the 13 TeV data. However, the plot does not tell the whole story, because the backgrounds have increased as well.  Moreover, when the main background originates from gluon-gluon collisions (as is the case for the W-prime search in the hadronic channel),  it grows faster than the signal.  Thus, if the 2 TeV W' is really there, the significance of the signal in the 13 TeV data should be comparable to that in the 8 TeV data in spite of the larger event rate. That will not be enough to fully clarify the situation, but the new data may make the story much more exciting if the excess reappears;  or much less exciting if it does not... When backgrounds are not an issue (for example, for high-mass dilepton resonances) the improvement in this year's data should be more spectacular.
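To put numbers on the W' example above, here is the back-of-envelope arithmetic. The cross-section ratio and the run-1 event count below are assumed values, reverse-engineered from the 50% gain and the ~15 events quoted in the text, rather than taken from an actual calculation:

# Back-of-envelope signal count for a hypothetical 2 TeV W' (Python)
lumi_8TeV = 20.0     # fb^-1 per experiment collected in 2012
lumi_13TeV = 4.0     # fb^-1 collected in 2015
xsec_ratio = 7.5     # assumed sigma(13 TeV)/sigma(8 TeV) for a 2 TeV qqbar resonance
net_gain = xsec_ratio * lumi_13TeV / lumi_8TeV
print(net_gain)      # 1.5, i.e. 50% more signal events than in run-1
events_run1 = 10.0   # assumed number of W' signal events in the 8 TeV dataset
print(net_gain * events_run1)  # ~15 events expected in the 13 TeV data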

We also see that, for new physics processes initiated by collisions of a gluon in one proton with a gluon in the other proton, the 13 TeV run is superior everywhere above the TeV scale, and the signal enhancement is more spectacular. For example, at 2 TeV one gains a factor of 3 in the signal rate. Therefore, models where the ATLAS diboson excess is explained via a Higgs-like scalar resonance will be tested very soon. The reach will also be extended for other hypothetical particles pair-produced in gluon collisions, such as gluinos in the minimal supersymmetric model. The current lower limit on the gluino mass obtained from the 8 TeV run is m ≳ 1.4 TeV (for decoupled squarks and a massless neutralino). For this mass, the signal gain in the 2015 run is roughly a factor of 6. Hence we can expect the gluino mass limits to be pushed upwards soon, by about 200 GeV or so.

Summarizing, we have a right to expect some interesting results during this winter break. The chances for a discovery in this year's data are non-zero, and the chances for tantalizing hints of new physics (whether a real thing or a background fluctuation) are considerable. Limits on certain imaginary particles will be somewhat improved. However, contrary to my hopes/fears, this year is not yet the decisive one for particle physics. The next one will be.

Saturday, 26 September 2015

Weekend Plot: celebration of a femtobarn

The LHC run-2 has reached the psychologically important point where the integrated luminosity exceeds one inverse femtobarn. To celebrate this event, here is a plot showing the ratio of the number of hypothetical resonances produced so far in run-2 and in run-1 collisions, as a function of the resonance mass:
In run-1 at 8 TeV, ATLAS and CMS each collected around 20 fb-1. For 13 TeV collisions the amount of data is currently 1/20 of that; however, the cross section for producing hypothetical TeV-scale particles is much larger. For heavy enough particles the gain in cross section exceeds a factor of 20, which means that run-2 now probes previously unexplored parameter space (this simplistic argument ignores the fact that backgrounds are also larger at 13 TeV, but it is approximately correct at very high masses where backgrounds are small). Currently, the turning point is about 2.7 TeV for resonances produced, at the fundamental level, in quark-antiquark collisions, and even lower for those produced in gluon-gluon collisions. The current plan is to continue the physics run till early November which, at this pace, should give us around 3 fb-1 to brood upon during the winter break. This means that the 2015 run will stop just short of sorting out the existence of the 2 TeV di-boson resonance indicated by the run-1 data. Unless, of course, the physics run is extended at the expense of the heavy-ion collisions scheduled for November ;)
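As for the turning point, it follows from a one-line luminosity argument. The snippet below encodes only the break-even condition; the mass at which the cross-section ratio actually reaches it has to be read off a parton-luminosity plot:

# Break-even between the run-1 and run-2 datasets (Python)
lumi_run1 = 20.0   # fb^-1 collected at 8 TeV
lumi_run2 = 1.0    # fb^-1 collected at 13 TeV so far
print(lumi_run1 / lumi_run2)  # 20.0: run-2 wins wherever sigma(13 TeV)/sigma(8 TeV) > 20

According to the plot, this happens above roughly 2.7 TeV for quark-antiquark initiated production, in agreement with the turning point quoted above.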

Saturday, 12 September 2015

What can we learn from LHC Higgs combination

Recently, ATLAS and CMS released the first combination of their Higgs results. Of course, one should not expect any big news here: a combination of two datasets that each agree very well with the Standard Model predictions has to agree very well with the Standard Model predictions... However, it is interesting to ask what the new results change at the quantitative level, concerning our constraints on the Higgs boson couplings to matter.

First, the experiments quote the overall signal strength μ, which measures how many Higgs events were detected at the LHC in all possible production and decay channels, compared to the expectation in the Standard Model; the latter corresponds, by definition, to μ=1. Now, if you had been too impatient to wait for the official combination, you could have made a naive one using the previous ATLAS (μ=1.18±0.14) and CMS (μ=1.00±0.14) results. Assuming the errors are Gaussian and uncorrelated, one obtains in this way the combined value μ=1.09±0.10. Instead, the true number is (drum roll) μ = 1.09 ± 0.11.
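The naive combination is just the inverse-variance weighted average of two uncorrelated Gaussian measurements; a minimal sketch in Python:

import math

def combine(mu1, err1, mu2, err2):
    # inverse-variance weighting of two uncorrelated Gaussian measurements
    w1, w2 = 1.0 / err1**2, 1.0 / err2**2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    return mu, math.sqrt(1.0 / (w1 + w2))

print(combine(1.18, 0.14, 1.00, 0.14))  # (1.09, 0.099), i.e. mu = 1.09 +- 0.10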
So, the official and naive numbers are practically the same. This result puts important constraints on certain models of new physics. One important corollary is that the Higgs boson branching fraction into invisible (or any undetected exotic) decays is limited as Br(h → invisible) ≤ 13% at 95% confidence level, assuming the Higgs production is not affected by new physics (in that case, every visible signal strength simply scales as 1 - Br(h → invisible), so a measured μ close to 1 caps the invisible width).

From the fact that the naive and official combinations coincide for the overall signal strength, one should not conclude that the work ATLAS and CMS have done together is useless. As one can see above, the statistical and systematic errors are comparable for that measurement, so a naive combination is not guaranteed to work. It just so happens that, in this particular case, the multiple nuisance parameters considered in the analysis pull essentially in random directions. But it could well have been different. Indeed, the deeper one goes into the details, the more relevant the impact of the official combination becomes. For the signal strengths measured in particular final states of the Higgs decay, the differences are more pronounced:
One can see that the naive combination somewhat underestimates the errors. Moreover, for the WW final state the central value is shifted by half a sigma (this is mainly because the individual ATLAS and CMS measurements that enter the combination seem to differ from the previously published ones). The difference is even more clearly visible in 2-dimensional fits, where the Higgs production cross sections via gluon fusion (ggf) and vector boson fusion (vbf) are treated as free parameters. This plot compares the regions preferred at 68% confidence level by the official and naive combinations:
There is a significant shift of the WW and also of the ττ ellipse. All in all, the LHC Higgs combination brings no revolution, but it allows one to obtain more precise and more reliable constraints on some new physics models. And the more detailed the released information, the more useful the combined results become.

Sunday, 30 August 2015

Weekend plot: SUSY limits rehashed

Lake Tahoe is famous for preserving dead bodies in good condition over many years, which makes it a natural place to organize the SUSY conference. As a tribute to this event, here is a plot from a recent ATLAS meta-analysis:
It shows the constraints on the gluino and lightest-neutralino masses in the pMSSM. Usually, the most transparent way to present experimental limits on supersymmetry is via simplified models. This consists in picking two or more particles out of the MSSM zoo and assuming that they are the only ones playing a role in the analyzed process. For example, a popular simplified model contains a gluino and a stable neutralino interacting via an effective quark-gluino-antiquark-neutralino coupling. In this model, gluino pairs are produced at the LHC through their couplings to ordinary gluons, and each gluino then promptly decays to 2 quarks and a neutralino via the effective coupling. This shows up in a detector as 4 or more jets plus the missing energy carried off by the neutralinos. Within this simplified model, one can thus interpret the LHC multi-jets + missing energy data as constraints on 2 parameters: the gluino mass and the lightest neutralino mass. One result of this analysis is that, for a massless neutralino, the gluino mass is constrained to be larger than about 1.4 TeV, see the white line in the plot.

A non-trivial question is what happens to these limits if one starts to fiddle with the remaining one hundred parameters of the MSSM. ATLAS tackles this question in the framework of the pMSSM, which is a version of the MSSM where all flavor- and CP-violating parameters are set to zero. In the resulting 19-dimensional parameter space, ATLAS picks a large number of points that reproduce the correct Higgs mass and are consistent with various precision measurements. They then check what fraction of the points with a given m_gluino and m_neutralino survives the constraints from all ATLAS supersymmetry searches so far, as sketched below. Of course, the results depend on how the parameter space is sampled, but we nevertheless get a feeling for how robust the limits obtained in simplified models are. It is interesting that the gluino mass limits turn out to be quite robust. From the plot one can see that, for a light neutralino, it is difficult to live with m_gluino < 1.4 TeV, and that there are no surviving points with m_gluino < 1.1 TeV. A similar conclusion does not hold for all simplified models; e.g., the limits on squark masses in simplified models can be relaxed considerably by going to the larger parameter space of the pMSSM. Another thing worth noticing is that the blind spot near the m_gluino = m_neutralino diagonal is not really there: it is covered by the ATLAS monojet searches.
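Schematically, the survival map is built as in the toy sketch below. The data structures are made up for illustration; the actual ATLAS procedure involves the full 19-parameter scan and the detector-level implementation of each search:

from collections import defaultdict

def survival_fraction(points, passes_all_searches, bin_size=100.0):
    # points: model points, each a dict with masses in GeV
    # passes_all_searches: returns True if a point survives every analysis
    total, alive = defaultdict(int), defaultdict(int)
    for p in points:
        key = (p["m_gluino"] // bin_size, p["m_neutralino"] // bin_size)
        total[key] += 1
        if passes_all_searches(p):
            alive[key] += 1
    return {k: alive[k] / total[k] for k in total}

Bins where the fraction drops to zero correspond to the region excluded as robustly as in the simplified model; bins with a small surviving fraction are where the pMSSM escape routes live.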

The LHC run-2 is going slowly, so we still have some time to play with the run-1 data. See the ATLAS paper for many more plots. New, stronger limits on supersymmetry are not expected before next summer.

Saturday, 15 August 2015

Weekend plot: ATLAS weighs in on Higgs to Tau Mu

After a long summer hiatus, here is a simple warm-up plot:

It displays the results of the ATLAS and CMS searches for h→τμ decays, together with their naive combination. The LHC collaborations have already observed Higgs boson decays into two τ leptons, and should be able to pinpoint h→μμ in run-2. However, h→τμ decays (and lepton flavor violation in general) are forbidden in the Standard Model, so a detection would be evidence for exciting new physics around the corner. Last summer, CMS came up with their 8 TeV result showing a 2.4 sigma hint of a signal. Most likely, this is just another entry in the long list of statistical fluctuations in the LHC run-1 data. Nevertheless, the CMS result is quite intriguing, especially in connection with the LHCb hints of lepton flavor violation in B-meson decays. Therefore, we have been waiting impatiently for a word from ATLAS. ATLAS is taking its time, but they have finally published the first chunk of the result, based on hadronic tau decays. Unfortunately, it is very inconclusive. It shows a small 1 sigma upward fluctuation, hence it does not kill the CMS hint. At the same time, the combined significance of the h→τμ signal increases only marginally, up to 2.6 sigma.

So, we are still in limbo. In the near future, ATLAS should reveal the 8 TeV h→τμ measurement with leptonic tau decays. This may clarify the situation, as the fully leptonic channel is more sensitive (at least, this is the case in the CMS analysis). But it is possible that for the final clarification we will have to wait 2 more years, until enough 13 TeV data has been analyzed.

Monday, 29 June 2015

Sit down and relaxion

New ideas are rare in particle physics these days, and solutions to the naturalness problem of the Higgs mass are true collector's items. For these reasons, the new mechanism addressing the naturalness problem via cosmological relaxation has stirred a lot of interest in the community. There is already an article explaining the idea in popular terms. Below, I will give you a more technical introduction.

In the Standard Model, the W and Z bosons and the fermions get their masses via the Brout-Englert-Higgs mechanism. To this end, the Lagrangian contains a scalar field H with a negative mass squared, V = - m^2 |H|^2. We know that the value of the parameter m is around 90 GeV - the Higgs boson mass divided by the square root of 2 (see the short derivation below). In quantum field theory, the mass of a scalar particle is expected to be near the cut-off scale M of the theory, unless there is a symmetry protecting it from quantum corrections. A value of m much smaller than M, without any reason or symmetry principle, constitutes the naturalness problem. Therefore, the dominant paradigm has been that, around the energy scale of 100 GeV, the Standard Model must be replaced by a new theory in which the parameter m is protected from quantum corrections. We know several mechanisms that could potentially protect the Higgs mass: supersymmetry, Higgs compositeness, the Goldstone mechanism, extra-dimensional gauge symmetry, and conformal symmetry. However, according to experimentalists, none of these seems to be realized at the weak scale. Therefore, we need to accept that nature is fine-tuned (e.g. susy is just around the corner), or to seek solace in religion (e.g. anthropics). Or to find a new solution to the naturalness problem: one that is not fine-tuned and is consistent with experimental data.
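The short derivation: adding the standard quartic term, V = - m^2 |H|^2 + λ|H|^4, the minimum sits at v^2 = m^2/λ with v ≈ 246 GeV, and expanding around it gives the physical Higgs mass m_h^2 = 2 m^2. Hence m = m_h/√2 ≈ 125 GeV/1.41 ≈ 90 GeV.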

Relaxation is a genuinely new solution, even if somewhat contrived. It is based on the following ingredients:
  1.  The Higgs mass term in the potential is V = M^2 |H|^2. That is to say,  the magnitude of the mass term is close to the cut-off of the theory, as suggested by the naturalness arguments. 
  2. The Higgs field is coupled to a new scalar field - the relaxion - whose vacuum expectation value is time-dependent in the early universe, effectively changing the Higgs mass squared during its evolution.
  3. When the mass squared turns negative and electroweak symmetry is broken, a back-reaction mechanism should prevent further time evolution of the relaxion, so that the Higgs mass term is frozen at a seemingly unnatural value.       
These 3 ingredients can be realized in a toy model where the Standard Model is coupled to the QCD axion. The crucial interactions are, schematically,
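V ⊃ (gΦ - M^2)|H|^2 + g M^2 Φ + Λ^4 cos(Φ/f),

up to higher powers of gΦ in the non-periodic part. Here Φ is the axion, f its decay constant, and Λ the height of its periodic potential, which in QCD is proportional to the light quark masses. This keeps only the structure needed for the story below; the coefficients and conventions of the original proposal may differ.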
Then the story goes as follows. The axion Φ starts at a large value, such that the Higgs mass term is positive and there is no electroweak symmetry breaking. During inflation, its value slowly decreases. Once gΦ < M^2, electroweak symmetry breaking is triggered and the Higgs field acquires a vacuum expectation value. The crucial point is that the height of the axion potential Λ depends on the light quark masses, which in turn depend on the Higgs expectation value v. As the relaxion evolves, v increases, and Λ increases proportionally, which provides the desired back-reaction. At some point, the slope of the axion potential is neutralized by the rising Λ, and the Higgs expectation value freezes in. The question is now quantitative: is it possible to arrange for the freeze-in to happen at a value of v well below the cut-off scale M? It turns out the answer is yes, at the cost of choosing strange (though not technically unnatural) theory parameters. In particular, the dimensionful coupling g between the relaxion and the Higgs has to be less than 10^-20 GeV (for a cut-off scale larger than 10 TeV), inflation has to last for at least 10^40 e-folds, and the Hubble scale during inflation has to be smaller than the QCD scale.
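Parametrically, the freeze-in condition reads g M^2 ≈ Λ^4(v)/f: the constant slope of the relaxion potential is balanced against the slope of the periodic term. Since Λ^4 grows with v, a tiny g means the balance is reached already at a small, electroweak-scale value of v - which is where the strange parameters above come from. (This is the schematic version of the condition, under the same assumptions as the potential written above.)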

The toy model above ultimately fails, however. Normally, the QCD axion is introduced so that its expectation value cancels the CP-violating θ-term in the Standard Model Lagrangian, whereas here it is stabilized at a value determined by its coupling to the Higgs field. Therefore, in the toy model, the axion effectively generates an order-one θ-term, in conflict with the experimental bound θ < 10^-10. Nevertheless, the same mechanism can be implemented in a realistic model. One possibility is to add a new QCD-like interaction with its own axion playing the relaxion role. In addition, one needs new "quarks" charged under the new strong interaction, whose masses are sensitive to the electroweak scale v, thus providing the back-reaction on the axion potential that terminates its evolution. In such a model the quantitative details are a bit different than in the QCD-axion toy model, but the "strangeness" of the parameters persists in every model constructed so far. In particular, the very low scale of inflation required by the relaxation mechanism is worrisome. Could it be that the naturalness problem is just swept into the realm of the poorly understood physics of inflation? The ultimate verdict thus depends on whether a complete and healthy model incorporating both relaxation and inflation can be constructed.

Certainly TBC.

Thanks to Brian for a great tutorial.