T2K at Sheffield

The T2K experiment is a second-generation neutrino oscillation experiment that uses the existing Super-Kamiokande detector as its far detector.


Background

A muon-neutrino beam is produced by directing the protons from a high-power 30 GeV proton synchrotron at the J-PARC nuclear physics facility in Tokai, on the east coast of Japan, onto a graphite target designed by UK physicists.

A novel feature of the T2K experimental setup is that the beam is deliberately aimed 2.5° off the direct line to Super-K: this off-axis configuration produces a much more monochromatic beam, and the 2.5° angle was chosen to maximise the oscillation probability.
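
As a rough illustration of why 2.5° is the right choice, the short sketch below (ordinary Python with NumPy, using approximate oscillation parameters rather than T2K's fitted values) evaluates the leading-order two-flavour νμ → νe appearance probability at the 295 km baseline: it peaks near 0.6 GeV, which is where the 2.5° off-axis beam spectrum is concentrated.

    # Leading-order two-flavour approximation:
    #   P ~ sin^2(2*theta13) * sin^2(theta23) * sin^2(1.267 * dm2 * L / E),
    # with matter effects and CP-violating terms neglected.
    # Parameter values are approximate and for illustration only.
    import numpy as np

    L_KM = 295.0        # J-PARC to Super-K baseline (km)
    DM2 = 2.4e-3        # |Delta m^2| in eV^2 (approximate)
    SIN2_2TH13 = 0.085  # approximate
    SIN2_TH23 = 0.5     # approximate (maximal mixing)

    def p_appearance(e_gev):
        """Approximate nu_mu -> nu_e appearance probability at 295 km."""
        return SIN2_2TH13 * SIN2_TH23 * np.sin(1.267 * DM2 * L_KM / e_gev) ** 2

    energies = np.linspace(0.3, 2.0, 171)
    peak = energies[np.argmax(p_appearance(energies))]
    print("First oscillation maximum near %.2f GeV" % peak)  # ~0.6 GeV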

The T2K beamline. Muon neutrinos or antineutrinos are produced at J-PARC on the right and travel 295 km to be detected by Super-K. The near detector, 280 m from the beam production point, characterises the beam and measures neutrino interaction cross sections.

The primary aim of the T2K experiment, achieved in 2011 with the publication of the paper Indication of Electron Neutrino Appearance from an Accelerator-produced Off-axis Muon Neutrino Beam [1], was the measurement of the mixing angle θ13, which at the time T2K was constructed was the only unmeasured angle in the PMNS neutrino mixing matrix.

This measurement has since been refined with more data [2], and is the most precise such measurement in the νμ → νe channel (reactor neutrino experiments instead measure the disappearance of electron antineutrinos).

Comparison of the T2K result with those from reactor experiments provides the first hint that δCP ≠ 0; this exciting suggestion will be followed up with more data.

Because of the similarity of the mass splittings Δm²13 and Δm²23, any νμ experiment optimised for measurement of θ13 is also optimised for measuring θ23 via the νμ disappearance channel: indeed, the analysis of electron neutrino appearance data requires a good knowledge of θ23.

Another of T2K's primary goals was therefore to improve our knowledge of the (23) mixing parameters. The 2017 result [3] is shown in the figure, and demonstrates that the T2K analysis remains world leading.

Compilation of results for the (23) mixing parameters, from T2K's paper of April 2017.

The neutrino oscillation results could not be achieved without good control of systematic errors. The T2K near detector, ND280, located on the J-PARC site 280 m from the target, is an essential part of this.

Its tasks are to characterise the unoscillated beam, including its profile, energy spectrum and intrinsic νe component, to improve our understanding of neutrino-nucleus interactions, and to measure any specific processes that contribute to oscillation backgrounds – particularly the production of π0s, which can be mistaken for electrons in Super-K. Most of the work of the Sheffield group has focused on ND280 physics.


T2K in the UK

The T2K UK Collaboration consists of groups from Daresbury Laboratory, Imperial College, Lancaster, Liverpool, Oxford, Queen Mary, RAL, Royal Holloway, Sheffield and Warwick.

The UK group designed the target and beam dump, the ND280 data acquisition and much of the electronics, and built the ND280 electromagnetic calorimeter (ECal). Sheffield's contribution included the testing and evaluation of MPPC photosensors, quality assurance and testing of all the ECal scintillator bars, and the design and development of a light injection system for calibration purposes.

The group also played a leading role in the optimization studies that led to the final design of the so-called P0D ECal, the electromagnetic calorimeters surrounding the central pi-zero detector.

Choose one of the links on the right to learn more about the activities of the Sheffield T2K group. For more information about the T2K experiment as a whole, visit the official T2K website.

References

  1. T2K Collaboration, PRL 107 (2011) 041801 (arXiv:1106.2822 [hep-ex]).
  2. T2K Collaboration, PRL 118 (2017) 151801.
  3. T2K Collaboration, arXiv preprint (April 2017).

Neutrino-nucleus interactions

Introduction

The interaction of a relatively low-energy neutrino with a nucleus is not a simple matter of W exchange between a neutrino and a quark. As well as the momentum distribution of nucleons within the nucleus and of quarks within the nucleon – significant when the neutrino energy is typically below 1 GeV, as in T2K – there are also multi-nucleon effects to contend with.

The figure shows Feynman graphs from the Nieves et al. model of multi-nucleon interactions (known as MEC or "meson exchange currents").

As a result of such effects, and a lack of good experimental data, the models of neutrino-nucleus interactions used in simulations are not complete descriptions of the physics and frequently do not describe the data very well.

This is a critical issue in oscillation analyses because it is necessary to unfold the measured charged lepton energy spectrum back into the neutrino energy spectrum: as multi-nucleon effects can affect the momentum of the outgoing charged lepton, a poor understanding of such effects results in large systematic errors.
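
To make the connection concrete, the sketch below shows the standard formula used to infer the neutrino energy from the muon kinematics alone under a charged-current quasi-elastic hypothesis (this is a minimal illustration, not T2K's reconstruction code, and the binding-energy value is an assumed typical number). Multi-nucleon events reconstructed with this formula acquire a biased energy, which is precisely the systematic effect described above.

    # Quasi-elastic (CCQE) energy reconstruction from the outgoing muon only.
    # Multi-nucleon (2p2h) events analysed under this hypothesis are assigned
    # a biased neutrino energy.
    import math

    M_N = 0.9396   # neutron mass (GeV)
    M_P = 0.9383   # proton mass (GeV)
    M_MU = 0.1057  # muon mass (GeV)
    E_B = 0.027    # effective binding energy (GeV); assumed typical value

    def reconstructed_enu(e_mu, p_mu, cos_theta_mu):
        """Neutrino energy under the QE hypothesis, from muon kinematics."""
        m_eff = M_N - E_B
        numerator = M_P**2 - m_eff**2 - M_MU**2 + 2.0 * m_eff * e_mu
        denominator = 2.0 * (m_eff - e_mu + p_mu * cos_theta_mu)
        return numerator / denominator

    # Example: a 0.4 GeV/c muon at 20 degrees to the beam direction.
    p_mu = 0.4
    e_mu = math.sqrt(p_mu**2 + M_MU**2)
    print("E_nu(rec) = %.3f GeV"
          % reconstructed_enu(e_mu, p_mu, math.cos(math.radians(20.0))))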

The T2K analysis strategy, shown in the flow diagram below, uses both pre-fits to external data and fits to ND280 data to constrain the model for the oscillation fits.

Optimising the physics input to the T2K simulation is the job of the Neutrino Interactions Working Group (NIWG). Sheffield students have maintained an extremely high profile in the NIWG and have played a leading role in tuning models of neutrino-nucleus interactions to external datasets.

Modelling neutrino-nucleus interactions

As with most particle physics experiments, T2K data are analysed with the aid of Monte Carlo programs which simulate the physical processes and track the resulting particles through a mathematical model of the detector. 

By comparing the simulation with real data, we can investigate the extent to which our theoretical understanding, as embodied in the event generator, is an accurate representation of the real world.

There are several standard event generators for neutrino interactions, and comparison of predictions from different generators is a key component of systematic error studies in neutrino-nucleus interactions.

T2K normally uses NEUT, which is Super-Kamiokande's official package and has been refined over many years, but the standard generator for the rest of the neutrino community is GENIE (which is also used by T2K, but usually for cross-checks and systematic error studies).

Another useful event generator is NuWro, which is flexible and easy to customise. Finally, the GiBUU generator has a more sophisticated treatment of transporting the reaction products through the nuclear medium, and is therefore an important benchmark for studies of final-state interactions, but it is computationally expensive and therefore not used as the standard generator for production simulations.

The principal ingredients of a model of neutrino-nucleus interactions are:

  • the nuclear model, describing the initial state of the nucleons within the nucleus
  • the nature of the interaction, whether it be elastic or quasi-elastic scattering off a single nucleon, inelastic scattering off a nucleon (exciting a resonance such as the Δ), coherent scattering off the whole nucleus, or deep inelastic scattering off a constituent quark within the nucleon
  • final-state interactions, where the products of the initial interaction may re-interact with other nucleons before emerging from the nucleus.

In addition, the simulation must take into account nucleon form factors resulting from the fact that nucleons are not fundamental particles, but composite objects consisting of a sea of quarks and gluons occupying a finite spatial extent.
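
As a simple example of such a form factor, the sketch below shows the widely used dipole parameterisation of the axial form factor; the "axial mass" MA appearing in it is one of the parameters that is typically tuned to data (the numbers used here are common illustrative choices, not T2K's fitted values, and the sign convention for gA varies between references).

    # Dipole parameterisation of the nucleon axial form factor:
    #   F_A(Q^2) = g_A / (1 + Q^2 / M_A^2)^2
    G_A = -1.267  # axial coupling at Q^2 = 0 (sign convention varies)
    M_A = 1.0     # "axial mass" in GeV; a typical dipole value

    def f_a_dipole(q2_gev2, m_a=M_A):
        """Dipole axial form factor as a function of Q^2 (GeV^2)."""
        return G_A / (1.0 + q2_gev2 / m_a**2) ** 2

    for q2 in (0.0, 0.25, 0.5, 1.0):
        print("Q^2 = %.2f GeV^2  ->  F_A = %.3f" % (q2, f_a_dipole(q2)))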

Over the past few years, increasingly sophisticated models of neutrino-nucleus interactions have been developed, making improvements in all these areas.

In the figure, a more realistic model of the nucleon momentum distribution in the 16O nucleus is compared with the simple relativistic Fermi gas; below it, the ingredients of the Nieves et al. model of neutrino-nucleus interactions are shown in the form of contributions to the W self-energy.

Model tuning

All models of neutrino-nucleus interactions have parameters which must be tuned to best describe the data. As shown in the flow-chart above, in T2K this tuning is done using fits to published data from other experiments.

This avoids too much circular reasoning—you are not tuning your model to data you will subsequently unfold using the same model—but introduces its own issues: data from different experiments may not agree with each other; published covariance matrices may be incomplete; the data as published may have been subjected to model-dependent corrections for acceptance or selection biases.

Nevertheless, it is possible to improve the description of the data very substantially by appropriate tuning of parameters, as shown in the figure on the right.
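
A minimal sketch of this kind of tuning is shown below, with entirely hypothetical numbers standing in for a published cross-section measurement and its covariance matrix. The real T2K fits involve many parameters and datasets, but the χ² construction is the same in spirit.

    # Toy tuning example: pick the parameter value that minimises
    #   chi2 = (data - model)^T V^-1 (data - model)
    # for a hypothetical external measurement with covariance V.
    import numpy as np

    data = np.array([1.10, 0.95, 0.70])          # invented cross sections
    cov = np.array([[0.010, 0.002, 0.001],
                    [0.002, 0.012, 0.002],
                    [0.001, 0.002, 0.015]])      # invented covariance
    cov_inv = np.linalg.inv(cov)

    def model_prediction(scale):
        """Toy model: one normalisation-like parameter scaling a nominal shape."""
        return scale * np.array([1.00, 0.90, 0.75])

    def chi2(scale):
        diff = data - model_prediction(scale)
        return float(diff @ cov_inv @ diff)

    scales = np.linspace(0.8, 1.2, 401)
    best = scales[np.argmin([chi2(s) for s in scales])]
    print("Best-fit scale = %.3f, chi2 = %.2f" % (best, chi2(best)))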

For more information about this analysis, see the accompanying write-up.

The NUISANCE project

The plethora of models, generators and datasets makes the tasks of tuning models to data, comparing data to models, and cross-checking different event generators increasingly complex and time-consuming.

The NUISANCE project, led by Sheffield student Patrick Stowell, has been set up to provide a common framework in which all members of the neutrino physics community can undertake these tasks.

NUISANCE aims to provide a coherent framework for comparing neutrino generators to external data and tuning model parameters.

It currently supports the NEUT, GENIE, NuWro, GiBUU and NUANCE generators, and can provide consistent comparisons between generators, comparisons of any or all supported generators with any dataset that is implemented in the framework, automated parameter tuning using reweight dials, and systematic error studies for cross-section measurements.

For more information about NUISANCE, see the project website and documentation.


Light injection and calibration

Introduction

The electromagnetic calorimeter (ECal) of the T2K Near Detector, ND280, consists of plastic scintillator read out by solid-state devices known as Multi-Pixel Photon Counters (MPPCs), which are essentially multi-pixel avalanche photodiodes. In order to function effectively, the ECal must be accurately calibrated in terms of photon yield (for energy calibration) and timing (to ensure that ECal hits are correctly associated with the beam).

Sheffield contributes to this effort in two ways:

  • we designed, built and installed a pulsed LED system which can inject short pulses of light into the system
  • we have taken on the role of calibrating the ECal timing system—in particular, of identifying and correcting for so-called "timeslips", where the synchronisation of the various slave clocks to the T2K master clock changes.

The light injection system

The ECal light injection system consists of a series of strips of blue LEDs mounted between the scintillator layers and the outer bulkhead of each ECal module, as shown in the diagram above. The LEDs can be flashed in very short pulses, providing "proof-of-life" and also timing calibration for the ECal MPPCs.

This system was designed and built in Sheffield after it was decided, late in the ECal construction phase, that such a system was necessary. As it was retrofitted to an existing design, the system had to fit into a very tight space and be constructed on a short timescale.

Nevertheless, all the systems work effectively, except for the system on the downstream ECal, which may have been damaged during assembly.

The LI system can be run interspersed with beam data, to provide continuous monitoring of the health of the ECal. In this mode, it can in principle be used for the rapid detection of RMM timeslips (see below), although the fact that the ECal timing is usually referred to the DsECal, where there is no working LI system, presents a technical problem.

The LED strips of the LI can be flashed face by face instead of all at once. Flashing one face and reading out the MPPCs at the other end of double-ended bars provides a clean method of measuring the speed of light transmission down the WLS fibres.

Subtracting the near-side time from the far-side time cancels out any delays introduced by the electronics, and repeating the process with the illumination coming from the other side and averaging the two results cancels any electronic offsets between the two ends.
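
A toy version of this procedure is sketched below (the hit times are invented for illustration): the far-minus-near time difference from each illumination direction contains the inter-end electronics offset with opposite signs, so the average of the two is the offset-free transit time.

    # Double-ended timing measurement: "near" is the readout at the flashed
    # face, "far" the readout at the opposite end of the bar.
    def transit_time(t_near, t_far):
        return t_far - t_near

    # Hypothetical hit times (ns) for one bar, flashed from side A then side B.
    flash_from_a = {"near": 12.3, "far": 27.9}
    flash_from_b = {"near": 11.8, "far": 28.4}

    dt_a = transit_time(flash_from_a["near"], flash_from_a["far"])
    dt_b = transit_time(flash_from_b["near"], flash_from_b["far"])

    # Any fixed offset between the two ends enters dt_a and dt_b with opposite
    # signs, so the average is the offset-free light transit time.
    print("Bar transit time = %.1f ns" % (0.5 * (dt_a + dt_b)))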

The results show that the light transmission time down the bars can be measured to an accuracy of a few percent. There is a slight but consistent variation with layer number, which is yet to be understood.

Timeslips

The ND280 DAQ is synchronised to a master clock. However, when units are power cycled, the resynchronisation of slave clocks to the master sometimes jumps by 10 ns (four clock ticks).

These so-called "timeslips" can occur at several different levels in the master-slave hierarchy. When they affect the whole detector, they are obviously not serious, but timeslips which affect only single subdetectors or parts of subdetectors must be detected and corrected for during calibration so that hit times across the whole detector can be compared during reconstruction.

Identification of timeslips was originally done manually by scanning the timing data. It is not difficult for the human eye to pick up timeslips, but it is tedious.

As part of the "service work" of their PhDs, Matt Lawe and Leon Pickard successively took on this task, and developed algorithms to identify timeslips automatically. Sample output from the algorithm can be seen in the figure below.
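
The sketch below illustrates the general idea rather than the algorithm actually used on ND280: a step of roughly 10 ns in the mean timing residual of a readout board, evaluated in a sliding window over cosmic-ray events, is flagged as a candidate timeslip.

    # Illustrative step detection on a series of timing residuals (ns).
    import numpy as np

    def find_timeslips(residuals_ns, window=50, threshold_ns=5.0):
        """Indices where the windowed mean residual shifts by > threshold_ns."""
        slips = []
        for i in range(window, len(residuals_ns) - window):
            before = np.mean(residuals_ns[i - window:i])
            after = np.mean(residuals_ns[i:i + window])
            if abs(after - before) > threshold_ns:
                slips.append(i)
        return slips

    # Toy data: residuals around 0 ns with a 10 ns slip injected at index 200.
    rng = np.random.default_rng(1)
    toy = rng.normal(0.0, 2.5, 400)
    toy[200:] += 10.0
    slips = find_timeslips(toy)
    print("Timeslip candidate region: indices %d-%d (true slip at 200)"
          % (slips[0], slips[-1]))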

Timeslip detection is performed on samples of cosmic ray muons collected during normal data taking.  The minimum detectable duration of a timeslip is set by the rate of cosmic events.  In principle, the light injection system, if run in "interleaved" mode where LI triggers are interspersed with regular data taking, has the ability to detect much shorter timeslips with high efficiency. 

The difficulty is that the reference ECal module, the DsECal, does not have a working LI system, making it difficult to combine LI results with the rest of the calibration software (and, of course, timeslips affecting RMMs that serve the DsECal cannot be detected by the LI system).  Therefore, to date, ND280 has continued to rely on cosmic samples for timeslip monitoring.


Neutral pion production in the T2K ND280

Introduction

Water Cherenkov detectors like Super-Kamiokande cannot distinguish between electrons and photons: both create an electromagnetic shower with many nearly collinear electrons and positrons above Cherenkov threshold, producing the characteristic fuzzy ring.
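
The Cherenkov threshold itself is easy to estimate (standard formula, with the refractive index of water taken as roughly 1.33): shower electrons and positrons of even a few MeV are above it, whereas a muon needs a total energy of well over 100 MeV.

    # A particle radiates Cherenkov light when beta > 1/n, i.e. when its total
    # energy exceeds m / sqrt(1 - 1/n^2).
    import math

    N_WATER = 1.33

    def cherenkov_threshold_mev(mass_mev, n=N_WATER):
        """Total-energy Cherenkov threshold for a particle of the given mass."""
        return mass_mev / math.sqrt(1.0 - 1.0 / n**2)

    print("electron threshold: %.2f MeV" % cherenkov_threshold_mev(0.511))  # ~0.8
    print("muon threshold:     %.0f MeV" % cherenkov_threshold_mev(105.7))  # ~160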

Therefore, any process which generates photons in the far detector is a potential background for νe appearance measurements. Neutral pions decay into two photons, one of which may not be reconstructed if it is soft or too nearly collinear with the other photon, so they are a significant potential background.

The cross-section for π0 production in neutrino-nucleus interactions is not well known, so the uncertainty in this background – a systematic error in the appearance measurement – is large.

Better measurements of π0 production cross-sections are therefore an important goal of the ND280 physics programme. This is a major part of the Sheffield group's physics analysis.

ND280: the T2K off-axis near detector

The T2K near detector complex consists of two main parts: the on-axis detector INGRID, which is designed to monitor the beam profile and neutrino interaction rate, and the off-axis ND280, which is intended to characterise the unoscillated beam at the same off-axis angle as Super-K and to measure neutrino interaction cross-sections.

ND280 is built around the former UA1 dipole magnet, which in this configuration produces a field of 0.2 T. It consists of two principal subdetectors:

  1. the Pi-0 Detector or P0D, a tracking calorimeter consisting of plastic scintillator bars interleaved with metal foil and water bags (the latter intended to provide a water target for direct comparison with Super-K)
  2. the Tracker, consisting of three time-projection chambers (TPCs) for precision tracking of charged particles, interleaved with two fine-grained detectors (FGDs), constructed of plastic scintillator bars (and water layers in the case of FGD2) and providing an active target for neutrino interactions, and surrounded on five sides by the ECal, a sampling calorimeter of plastic scintillator bars and lead sheets.

In principle, as its name suggests, the P0D is designed for π0 reconstruction while the Tracker is intended to analyse charged-particle final states.

However, the combination of charged-particle tracking and electromagnetic calorimetry does provide the necessary functionality for π0 reconstruction, providing a cross-check of the P0D results with different systematic errors. In addition, the magnet yoke is instrumented with plastic scintillator panels to act as muon detectors, forming the Side Muon Range Detector (SMRD).

Neutral pion production in neutrino interactions

Neutral pions can be produced in neutrino-nucleus interactions through either W exchange (charged current, in which the neutrino converts to the equivalent charged lepton) or Z exchange (neutral current, in which the neutrino transfers energy and momentum but remains a neutrino). 

The π0 may be produced singly (CC1π0, e.g. νμ + n → μ− + p + π0, or NC1π0, e.g. νμ + p → νμ + p + π0) or in conjunction with other mesons.

As regards background to νe appearance in Super-K, the NC1π0 reaction is the most important, since this produces the final state that is most easily mistaken for an electron.

In CC reactions, the final-state muon would produce a Cherenkov ring which would prevent misidentification; in cases where more than one meson is present in the final state, it is unlikely that the observed event in Super-K would consist only of a single electron-like ring.

However, the NC1π0 reaction is quite difficult to study in ND280, since the frequent lack of a charged track (there may be a soft proton, but this usually does not travel far and may not be reconstructed) makes it hard to locate the event vertex. For this reason, we also study the closely related CC1π0 process.

Charged current π0 (CC1π0) production

In this analysis, the muon is identified using cuts developed by the νμ working group and the π0 is reconstructed from two isolated ECal showers. The event vertex is required to lie in one of the FGDs, and the ECal showers may be detected in the barrel or downstream ECals.

This yields six different event topologies (three ECal combinations, DsDs, DsBrl and BrlBrl, for each FGD) which have different kinematic acceptance and were analysed individually.

After the muon selection, events with two or three isolated ECal clusters were selected as π0 candidates. The principal selection tool was a boosted decision tree (BDT), a form of multivariate analysis. BDTs, neural networks and similar multivariate analyses are valuable because (1) they can take correlations between variables into account and (2) they are optimised by "training" on samples where the desired outcome is known (generally simulations), so that there is no need for the user to know exactly what the discriminating features are.
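
As a generic illustration of the BDT approach (using scikit-learn and invented input variables, not the tools or variables of the actual T2K analysis), the sketch below trains a gradient-boosted classifier on labelled simulated events and then scores a new event.

    # Train on simulated events with known labels, then score data-like events.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    n = 2000

    # Invented discriminating variables for signal and background.
    signal = np.column_stack([rng.normal(135.0, 25.0, n),   # e.g. diphoton mass (MeV)
                              rng.normal(0.8, 0.15, n)])    # e.g. a shower-shape score
    background = np.column_stack([rng.uniform(50.0, 400.0, n),
                                  rng.normal(0.5, 0.25, n)])

    X = np.vstack([signal, background])
    y = np.concatenate([np.ones(n), np.zeros(n)])

    bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
    bdt.fit(X, y)

    # Scores close to 1 are signal-like; a cut on this score defines the selection.
    print("BDT score for a candidate event:", bdt.predict_proba([[140.0, 0.75]])[0, 1])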

The disadvantage is that, precisely because the user does not know exactly what the discriminating features are, the method is susceptible to systematic errors caused by discrepancies between simulation and data, and these systematic errors can be difficult to evaluate.

In this particular case, this is a price worth paying, because the differences between signal and background are quite subtle and a fully cuts-based analysis would be very difficult to implement.

Across all topologies, an overall selection efficiency of 10% and purity of 12% were achieved. As can be seen in the example invariant mass plot for the selected events shown on the left, the principal background consists of events which do indeed include a π0, but accompanied by additional final-state particles.

The selection in fact produces a purity of 28% for correctly reconstructing a π0 from any final state (more than 28% of the events come from a reaction which included a π0, but not all the π0s in such events were correctly reconstructed).
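
For reference, the invariant mass plotted for these candidates is the usual diphoton combination. The sketch below (not the T2K reconstruction code; the cluster energies and directions are invented) shows how it is built from the two ECal showers, peaking near 135 MeV/c² for a genuine π0.

    # m_gg = sqrt(2 * E1 * E2 * (1 - cos(theta))), with theta the opening angle
    # between the photon directions taken from the event vertex.
    import numpy as np

    def diphoton_mass(e1, e2, dir1, dir2):
        """Invariant mass (MeV/c^2) of two massless clusters with energies in MeV."""
        d1 = np.asarray(dir1, dtype=float)
        d2 = np.asarray(dir2, dtype=float)
        cos_theta = float(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))
        return float(np.sqrt(2.0 * e1 * e2 * (1.0 - cos_theta)))

    # Invented example: two ~100 MeV showers with an ~82 degree opening angle.
    print("m_gg = %.1f MeV/c^2"
          % diphoton_mass(110.0, 95.0, (0.0, 0.0, 1.0), (0.99, 0.0, 0.13)))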

For more information about this analysis, please see the accompanying summary (PDF, 255KB).

Neutral current π0 (NC1π0) production

The final state in an NC1π0 event consists of two photons from the π0 decay and some number of nucleons ejected from the struck nucleus.

Because the T2K beam is low energy, the ejected nucleons are very soft and frequently not reconstructed (in the worst-case scenario from a reconstruction point of view, the ejected nucleon may be a single neutron, but even soft protons will stop quickly and may not leave a reconstructable track). 

Furthermore, the two photons from π0 decay are of quite low energy, and will therefore produce few hits in the ND280 electromagnetic calorimeter (which was optimized for detecting electrons from νe-induced CC events – an essential check on the νe content of the J-PARC neutrino beam).

The efficiency for reconstructing the softer of the two photons in the ND280 ECal is therefore quite low, making selection of NC1π0 events very challenging. In addition, if no charged tracks are present it becomes very difficult to locate the event vertex, as the low-energy ECal clusters produced by the soft π0 photons are small and therefore do not have very well-defined directions.

This is a serious problem, since misreconstructing the vertex both distorts the invariant mass calculation (since the directions of the photons are incorrect) and makes it impossible to define the fiducial volume accurately (with obvious implications for cross-section calculations).

To achieve the highest possible precision in locating the event vertex, a sequential algorithm is used (a simplified sketch follows the list):

  1. If there is at least one TPC track, the vertex is defined to be the start point of the highest momentum TPC track.
  2. If there are no TPC tracks, but there is at least one FGD-only track, the vertex is defined to be the start point of the highest momentum FGD track.
  3. If there are no reconstructed tracks, but there are hits in one or both FGDs which have not been successfully combined into a track, the charge weighted mean position of the hits in the FGD with the higher number of hits is adopted as the vertex.
  4. If there is no FGD activity, the thrust axis of the higher-energy ECal cluster is extrapolated back to the central plane of the appropriate FGD and this is adopted as the vertex. If the extrapolation falls outside the FGD, the procedure is repeated for the lower-energy cluster.
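
The simplified sketch below captures this order of preference; the event structures, FGD geometry and numbers are hypothetical stand-ins for the real ND280 event model.

    # Fall-back vertexing: try each method in turn until one succeeds.
    FGD_CENTRE_Z = 1500.0      # hypothetical FGD central-plane z (mm)
    FGD_XY_HALF_WIDTH = 900.0  # hypothetical transverse half-width (mm)

    def extrapolate_to_fgd_plane(cluster):
        """Follow the cluster thrust axis back to the FGD central plane."""
        x0, y0, z0 = cluster["position"]
        dx, dy, dz = cluster["axis"]
        if dz == 0.0:
            return None
        s = (FGD_CENTRE_Z - z0) / dz
        return (x0 + s * dx, y0 + s * dy, FGD_CENTRE_Z)

    def inside_fgd(vertex):
        return abs(vertex[0]) < FGD_XY_HALF_WIDTH and abs(vertex[1]) < FGD_XY_HALF_WIDTH

    def choose_vertex(tpc_tracks, fgd_tracks, fgd_hits_by_detector, ecal_clusters):
        # 1. Start point of the highest-momentum TPC track.
        if tpc_tracks:
            return max(tpc_tracks, key=lambda t: t["momentum"])["start"]
        # 2. Otherwise the start point of the highest-momentum FGD-only track.
        if fgd_tracks:
            return max(fgd_tracks, key=lambda t: t["momentum"])["start"]
        # 3. Otherwise the charge-weighted mean of unassociated hits in
        #    whichever FGD has more hits.
        non_empty = [hits for hits in fgd_hits_by_detector if hits]
        if non_empty:
            hits = max(non_empty, key=len)
            total_q = sum(h["charge"] for h in hits)
            return tuple(sum(h["charge"] * h["pos"][i] for h in hits) / total_q
                         for i in range(3))
        # 4. Otherwise extrapolate the higher-energy ECal cluster to the FGD
        #    central plane, falling back to the lower-energy cluster.
        for cluster in sorted(ecal_clusters, key=lambda c: c["energy"], reverse=True):
            vertex = extrapolate_to_fgd_plane(cluster)
            if vertex is not None and inside_fgd(vertex):
                return vertex
        return None

    # Example: no tracks or FGD hits, only two ECal clusters.
    print(choose_vertex([], [], [[], []],
                        [{"position": (200.0, 100.0, 2200.0), "axis": (0.1, 0.05, 1.0), "energy": 80.0},
                         {"position": (-150.0, 50.0, 2300.0), "axis": (-0.05, 0.02, 1.0), "energy": 45.0}]))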

As can be seen from the plot, this sequence ensures that the most accurate available method is used for each event. Despite this, out-of-fiducial-volume events remain the largest background in the selection, in contrast to the CC1π0 analysis, where this background is very small.

Apart from the vertexing, the selection proceeds along similar lines to the CC1π0 analysis, requiring two isolated ECal clusters as the photon candidates (events with more than two clusters are rejected to avoid issues with combinatorics) and using a boosted decision tree to reject background.

The overall efficiency of the NC1π0 selection is 34%, with a purity of 23%, calculated with respect to those NC1π0 events where both decay photons really did convert in the ECal.

Calculated with respect to the entire NC1π0 sample, including events where at least one photon converted before reaching the ECal, the efficiency goes down to 19% (unsurprising, as the selection requires exactly two isolated ECal clusters), but the purity increases to 29% (i.e. some of the "background" is genuine NC1π0 events which should have been reconstructed as containing an e+e– pair from a photon conversion before the ECal, but were not – perhaps because the conversion occurred too far out to yield a reconstructable track).
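
The arithmetic behind these numbers is straightforward. The toy counts below are invented, chosen only to mirror the quoted percentages, and show why widening the signal definition lowers the efficiency but raises the purity of the same selected sample.

    # efficiency = selected signal / all signal; purity = selected signal / all selected.
    selected = 1000                  # events passing the selection (invented)

    # Narrow definition: NC1pi0 with both photons converting in the ECal.
    narrow_selected_signal = 230     # -> purity 23%
    narrow_total_signal = 676        # -> efficiency ~34%

    # Broad definition: all NC1pi0, including early photon conversions.
    broad_selected_signal = 290      # some "background" becomes signal -> purity 29%
    broad_total_signal = 1526        # larger denominator -> efficiency ~19%

    for label, sel, tot in [("narrow", narrow_selected_signal, narrow_total_signal),
                            ("broad", broad_selected_signal, broad_total_signal)]:
        print("%s definition: efficiency %.0f%%, purity %.0f%%"
              % (label, 100.0 * sel / tot, 100.0 * sel / selected))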

As can be seen from the invariant mass plot on the left, the data sample is significantly smaller than the Monte Carlo expectation, although the shapes of distributions are similar. 

As the calculated cross section is in disagreement with an analysis by MiniBooNE, this finding requires confirmation before it can be accepted as a real effect. However, it does demonstrate the potential importance of measuring this cross section as an input to the νe appearance analysis.

For more information about this study, please see the accompanying documentation.