Particle-in-cell

In plasma physics, the particle-in-cell (PIC) method refers to a technique used to solve a certain class of partial differential equations. In this method, individual particles (or fluid elements) in a Lagrangian frame are tracked in continuous phase space, whereas moments of the distribution such as densities and currents are computed simultaneously on Eulerian (stationary) mesh points.

PIC methods were already in use as early as 1955,[1] even before the first Fortran compilers were available. The method was popularized for plasma simulation in the late 1950s and early 1960s by Buneman, Dawson, Hockney, Birdsall, Morse and others. In plasma physics applications, the method amounts to following the trajectories of charged particles in self-consistent electromagnetic (or electrostatic) fields computed on a fixed mesh.[2]

Technical aspects

For many types of problems, the classical PIC method invented by Buneman, Dawson, Hockney, Birdsall, Morse and others is relatively intuitive and straightforward to implement. This probably accounts for much of its success, particularly for plasma simulation, for which the method typically includes the following procedures (a minimal code sketch of one such cycle is shown after the list):

  • Integration of the equations of motion.
  • Interpolation of charge and current source terms to the field mesh.
  • Computation of the fields on mesh points.
  • Interpolation of the fields from the mesh to the particle locations.
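
These four steps are repeated every time step. The following minimal one-dimensional electrostatic cycle, written here as a Python/NumPy sketch, is only an illustration of the loop structure; the normalized units, grid size, particle number and initial conditions are assumptions made for this example and are not taken from the references.

    import numpy as np

    # Minimal 1D electrostatic PIC cycle in normalized units (illustrative sketch):
    # electrons with q = -1, m = 1, epsilon_0 = 1, moving against a fixed
    # neutralizing ion background on a periodic domain.
    rng = np.random.default_rng(0)
    ng, n_part = 64, 10000                 # grid points and super-particles
    L = 2 * np.pi                          # domain length
    dx, dt = L / ng, 0.1
    x = rng.uniform(0, L, n_part)          # particle positions
    v = rng.normal(0.0, 0.1, n_part)       # particle velocities
    q_sp = -L / n_part                     # super-particle charge (mean electron density 1)

    for step in range(200):
        # 1. Interpolation of charge to the mesh (first-order / CIC weighting)
        g = np.floor(x / dx).astype(int)   # index of the grid point to the left
        w = x / dx - g                     # fractional distance to that point
        rho = (np.bincount(g % ng, q_sp * (1 - w), minlength=ng)
               + np.bincount((g + 1) % ng, q_sp * w, minlength=ng)) / dx
        rho += 1.0                         # neutralizing ion background

        # 2. Field solve on the mesh: d2(phi)/dx2 = -rho, E = -d(phi)/dx (via FFT)
        k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
        k[0] = 1.0                         # placeholder to avoid division by zero
        phi_hat = np.fft.fft(rho) / k**2
        phi_hat[0] = 0.0                   # zero-mean potential
        E = np.fft.ifft(-1j * k * phi_hat).real

        # 3. Interpolation of the field from the mesh to the particle positions
        E_p = E[g % ng] * (1 - w) + E[(g + 1) % ng] * w

        # 4. Integration of the equations of motion (leapfrog)
        v += -1.0 * E_p * dt               # q/m = -1 for the electrons
        x = (x + v * dt) % L               # periodic boundaries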

Models which include interactions of particles only through the average fields are called PM (particle-mesh). Those which include direct binary interactions are PP (particle-particle). Models with both types of interactions are called PP-PM or P3M.

Since the early days, it has been recognized that the PIC method is susceptible to error from so-called discrete particle noise.[3] This error is statistical in nature, and today it remains less well understood than errors in traditional fixed-grid methods, such as Eulerian or semi-Lagrangian schemes.

Modern geometric PIC algorithms are based on a very different theoretical framework. These algorithms use tools of discrete manifolds, interpolating differential forms, and canonical or non-canonical symplectic integrators to guarantee gauge invariance and the conservation of charge, energy-momentum, and, more importantly, the infinite-dimensional symplectic structure of the particle-field system.[4][5] These desired features are attributed to the fact that geometric PIC algorithms are built on the more fundamental field-theoretical framework and are directly linked to the perfect form, i.e., the variational principle of physics.

Basics of the PIC plasma simulation technique

Inside the plasma research community, systems of different species (electrons, ions, neutrals, molecules, dust particles, etc.) are investigated. The equations associated with PIC codes are therefore the Lorentz force as the equation of motion, solved in the so-called pusher or particle mover of the code, and Maxwell's equations determining the electric and magnetic fields, calculated in the (field) solver.

Super-particles

The real systems studied are often extremely large in terms of the number of particles they contain. In order to make simulations efficient or at all possible, so-called super-particles are used. A super-particle (or macroparticle) is a computational particle that represents many real particles; it may be millions of electrons or ions in the case of a plasma simulation, or, for instance, a vortex element in a fluid simulation. Rescaling the number of particles in this way is permissible because the acceleration from the Lorentz force depends only on the charge-to-mass ratio, so a super-particle follows the same trajectory as a real particle would.

The number of real particles corresponding to a super-particle must be chosen such that sufficient statistics can be collected on the particle motion. If there is a significant difference between the density of different species in the system (between ions and neutrals, for instance), separate real to super-particle ratios can be used for them.
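
As a simple illustration of the bookkeeping involved, the super-particle weight is commonly derived from the physical density, the simulated volume and the number of computational particles available for a species; the numbers in the following sketch are arbitrary example values, not values from the text.

    # Choosing the super-particle weight (illustrative numbers).
    n_e = 1e18             # real electron density [m^-3]
    volume = 1e-6          # simulated volume [m^3] (here 1 mm^3)
    n_macro = 5_000_000    # computational particles available for this species

    weight = n_e * volume / n_macro   # real electrons per super-particle
    # Each super-particle then carries charge weight*q_e and mass weight*m_e,
    # so its charge-to-mass ratio, and hence its trajectory, is unchanged.
    print(f"each super-particle represents {weight:.2e} electrons")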

The particle mover

Even with super-particles, the number of simulated particles is usually very large ($> 10^5$), and often the particle mover is the most time-consuming part of PIC, since it has to be done for each particle separately. Thus, the pusher is required to be of high accuracy and speed, and much effort is spent on optimizing the different schemes.

The schemes used for the particle mover can be split into two categories: implicit and explicit solvers. While implicit solvers (e.g. the implicit Euler scheme) calculate the particle velocity from the already updated fields, explicit solvers use only the old force from the previous time step and are therefore simpler and faster, but require a smaller time step. In PIC simulations the leapfrog method, a second-order explicit method, is typically used.[6] The Boris algorithm is also widely used; by splitting the velocity update, it cancels the electric field out of the discretized Newton-Lorentz equation, reducing the magnetic part to a pure rotation.[7][8]

For plasma applications, the leapfrog method takes the following form:

$$\frac{\mathbf{x}_{k+1}-\mathbf{x}_{k}}{\Delta t}=\mathbf{v}_{k+1/2},$$
$$\frac{\mathbf{v}_{k+1/2}-\mathbf{v}_{k-1/2}}{\Delta t}=\frac{q}{m}\left(\mathbf{E}_{k}+\frac{\mathbf{v}_{k+1/2}+\mathbf{v}_{k-1/2}}{2}\times\mathbf{B}_{k}\right),$$

where the subscript $k$ refers to "old" quantities from the previous time step, $k+1$ to updated quantities from the next time step (i.e. $t_{k+1}=t_{k}+\Delta t$), and velocities are calculated in between the usual time steps $t_{k}$.

The equations of the Boris scheme that are substituted into the above equations are:

$$\mathbf{x}_{k+1}=\mathbf{x}_{k}+\Delta t\,\mathbf{v}_{k+1/2},$$
$$\mathbf{v}_{k+1/2}=\mathbf{u}'+q'\mathbf{E}_{k},$$

with

$$\mathbf{u}'=\mathbf{u}+\left(\mathbf{u}+\left(\mathbf{u}\times\mathbf{h}\right)\right)\times\mathbf{s},$$
$$\mathbf{u}=\mathbf{v}_{k-1/2}+q'\mathbf{E}_{k},$$
$$\mathbf{h}=q'\mathbf{B}_{k},$$
$$\mathbf{s}=\frac{2\,\mathbf{h}}{1+h^{2}},$$

and $q'=\dfrac{\Delta t\,q}{2m}$.

Because of its excellent long-term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. It was realized that the excellent long-term accuracy of the nonrelativistic Boris algorithm is due to the fact that it conserves phase-space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas. It has also been shown[9] that one can improve on the relativistic Boris push to make it both volume-preserving and have a constant-velocity solution in crossed E and B fields.
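
The Boris equations above translate almost line by line into code. The following nonrelativistic Python/NumPy sketch assumes that the electric and magnetic fields have already been interpolated to the particle positions; the test-particle values in the usage example are illustrative.

    import numpy as np

    def boris_push(x, v, E, B, q, m, dt):
        """One step of the Boris scheme as written in the equations above
        (nonrelativistic sketch; x, v, E, B are (N, 3) arrays, with E and B
        already interpolated to the particle positions)."""
        qp = dt * q / (2.0 * m)                         # q' = dt*q/(2m)
        u = v + qp * E                                  # u  = v_{k-1/2} + q'*E_k
        h = qp * B                                      # h  = q'*B_k
        s = 2.0 * h / (1.0 + np.sum(h * h, axis=-1, keepdims=True))  # s = 2h/(1+h^2)
        u_prime = u + np.cross(u + np.cross(u, h), s)   # u' = u + (u + u x h) x s
        v_new = u_prime + qp * E                        # v_{k+1/2} = u' + q'*E_k
        x_new = x + dt * v_new                          # x_{k+1} = x_k + dt*v_{k+1/2}
        return x_new, v_new

    # Usage example: a single electron gyrating in a uniform magnetic field.
    x = np.array([[1.0, 0.0, 0.0]])
    v = np.array([[0.0, 1.0, 0.0]])
    E = np.zeros((1, 3))
    B = np.array([[0.0, 0.0, 1.0]])
    for _ in range(1000):
        x, v = boris_push(x, v, E, B, q=-1.0, m=1.0, dt=0.05)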

The field solver

The most commonly used methods for solving Maxwell's equations (or more generally, partial differential equations (PDEs)) belong to one of the following three categories:

  • Finite difference methods (FDM)
  • Finite element methods (FEM)
  • Spectral methods

With the FDM, the continuous domain is replaced with a discrete grid of points, on which the electric and magnetic fields are calculated. Derivatives are then approximated with differences between neighboring grid-point values and thus PDEs are turned into algebraic equations.

Using FEM, the continuous domain is divided into a discrete mesh of elements. The PDEs are treated as an eigenvalue problem and initially a trial solution is calculated using basis functions that are localized in each element. The final solution is then obtained by optimization until the required accuracy is reached.

Spectral methods, such as those based on the fast Fourier transform (FFT), also transform the PDEs into an eigenvalue problem, but this time the basis functions are high order and defined globally over the whole domain. The domain itself is not discretized in this case; it remains continuous. Again, a trial solution is found by inserting the basis functions into the eigenvalue equation and then optimized to determine the best values of the initial trial parameters.
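
In the electrostatic limit, a finite-difference field solver reduces to a linear algebraic system for the potential on the grid points. The following one-dimensional Python/NumPy sketch assumes the potential is held at zero just outside the grid (grounded walls) and uses a made-up uniform charge density; it is meant only to show how the FDM turns the Poisson equation into algebraic equations.

    import numpy as np

    def solve_poisson_fd(rho, dx, eps0=8.854e-12):
        """Solve d2(phi)/dx2 = -rho/eps0 on a 1D grid with phi = 0 just
        outside both ends (illustrative sketch; a real code would use a
        sparse or tridiagonal solver)."""
        n = rho.size
        # Second-order central difference: (phi[i-1] - 2*phi[i] + phi[i+1]) / dx^2
        A = (np.diag(-2.0 * np.ones(n)) +
             np.diag(np.ones(n - 1), 1) +
             np.diag(np.ones(n - 1), -1)) / dx**2
        phi = np.linalg.solve(A, -rho / eps0)
        E = -np.gradient(phi, dx)          # electric field E = -d(phi)/dx
        return phi, E

    rho = np.full(64, 1.6e-19 * 1e14)      # uniform charge density [C/m^3], made up
    phi, E = solve_poisson_fd(rho, dx=1e-4)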

Particle and field weighting

The name "particle-in-cell" originates in the way that plasma macro-quantities (number density, current density, etc.) are assigned to simulation particles (i.e., the particle weighting). Particles can be situated anywhere on the continuous domain, but macro-quantities are calculated only on the mesh points, just as the fields are. To obtain the macro-quantities, one assumes that the particles have a given "shape" determined by the shape function

$$S(\mathbf{x}-\mathbf{X}),$$

where $\mathbf{x}$ is the coordinate of the particle and $\mathbf{X}$ the observation point. Perhaps the easiest and most used choice for the shape function is the so-called cloud-in-cell (CIC) scheme, which is a first-order (linear) weighting scheme. Whatever the scheme is, the shape function has to satisfy the following conditions:[10] space isotropy, charge conservation, and increasing accuracy (convergence) for higher-order terms.

The fields obtained from the field solver are determined only on the grid points and cannot be used directly in the particle mover to calculate the force acting on particles; they have to be interpolated via the field weighting:

$$\mathbf{E}(\mathbf{x})=\sum_{i}\mathbf{E}_{i}\,S(\mathbf{x}_{i}-\mathbf{x}),$$

where the subscript $i$ labels the grid point. To ensure that the forces acting on particles are self-consistently obtained, the way of calculating macro-quantities from particle positions on the grid points and the way of interpolating fields from grid points to particle positions have to be consistent, too, since they both appear in Maxwell's equations. Above all, the field interpolation scheme should conserve momentum. This can be achieved by choosing the same weighting scheme for particles and fields and by ensuring the appropriate space symmetry (i.e. no self-force and fulfilling the action-reaction law) of the field solver at the same time.[10]
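
The following one-dimensional Python/NumPy sketch shows first-order (CIC) weighting on a periodic grid, using the same linear weights for depositing charge and for gathering the field back to the particles, as required above for momentum conservation; the function names and the normalization are illustrative choices.

    import numpy as np

    def cic_weights(xp, dx, ng):
        """Linear (cloud-in-cell) weights of particles at positions xp."""
        g = np.floor(xp / dx).astype(int)      # index of the grid point to the left
        w = xp / dx - g                        # fractional distance to that point
        return g % ng, (g + 1) % ng, 1.0 - w, w

    def deposit_charge(xp, q_sp, dx, ng):
        """Particle-to-mesh step: accumulate charge density on the grid."""
        gl, gr, wl, wr = cic_weights(xp, dx, ng)
        q_grid = (np.bincount(gl, q_sp * wl, minlength=ng)
                  + np.bincount(gr, q_sp * wr, minlength=ng))
        return q_grid / dx                     # charge per node -> charge density

    def gather_field(E, xp, dx):
        """Mesh-to-particle step: interpolate E with the same weights."""
        gl, gr, wl, wr = cic_weights(xp, dx, E.size)
        return E[gl] * wl + E[gr] * wr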

Collisions

Because the field solver is required to be free of self-forces, the field generated by a particle inside a cell must decrease with decreasing distance from the particle; hence, inter-particle forces inside the cells are underestimated. This can be balanced with the aid of Coulomb collisions between charged particles. Simulating the interaction for every pair of particles in a large system would be computationally too expensive, so several Monte Carlo methods have been developed instead. A widely used method is the binary collision model,[11] in which particles are grouped according to their cell, these particles are paired randomly, and the pairs are then collided.

In a real plasma, many other reactions may play a role, ranging from elastic collisions, such as collisions between charged and neutral particles, through inelastic collisions, such as electron-neutral ionizing collisions, to chemical reactions, each requiring separate treatment. Most of the collision models handling charged-neutral collisions use either the direct Monte Carlo scheme, in which all particles carry information about their collision probability, or the null-collision scheme,[12][13] which does not analyze all particles but uses the maximum collision probability for each charged species instead.
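
A schematic sketch of the null-collision selection is shown below. The constant cross sections and the gas density are placeholders invented for this example; real codes use tabulated, energy-dependent cross sections and also handle the energy loss and the creation of new particles in ionizing collisions.

    import numpy as np

    def random_unit_vector(rng):
        u = rng.normal(size=3)
        return u / np.linalg.norm(u)

    def null_collision_step(v, n_gas, dt, rng):
        """Null-collision method for electron-neutral collisions (schematic)."""
        speed = np.linalg.norm(v, axis=1)
        sigma_el, sigma_iz = 1e-19, 2e-20          # elastic / ionization cross sections [m^2]
        sigma_tot = sigma_el + sigma_iz
        nu_max = n_gas * sigma_tot * speed.max()   # maximum collision frequency [1/s]
        p_coll = 1.0 - np.exp(-nu_max * dt)        # probability of any event per particle

        # Only this fraction of the particles needs to be examined at all.
        for i in np.flatnonzero(rng.random(len(v)) < p_coll):
            r = rng.random() * nu_max
            nu_i = n_gas * speed[i]
            if r < nu_i * sigma_el:
                v[i] = speed[i] * random_unit_vector(rng)   # isotropic elastic scatter
            elif r < nu_i * sigma_tot:
                pass    # an ionizing collision would create a new electron-ion pair here
            # otherwise: "null" collision, the particle is left untouched
        return v

    rng = np.random.default_rng(1)
    v = rng.normal(0.0, 1e6, size=(100_000, 3))    # electron velocities [m/s]
    v = null_collision_step(v, n_gas=1e21, dt=1e-9, rng=rng)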

Accuracy and stability conditions

As in every simulation method, in PIC the time step and the grid size must be well chosen so that the time- and length-scale phenomena of interest are properly resolved in the problem. In addition, time step and grid size affect the speed and accuracy of the code.

For an electrostatic plasma simulation using an explicit time-integration scheme (e.g. leapfrog, which is most commonly used), two important conditions regarding the grid size $\Delta x$ and the time step $\Delta t$ should be fulfilled in order to ensure the stability of the solution:

$$\Delta x<3.4\,\lambda_{D},$$
$$\Delta t\leq 2\,\omega_{pe}^{-1},$$

which can be derived by considering the harmonic oscillations of a one-dimensional unmagnetized plasma. The latter condition is strictly required, but practical considerations related to energy conservation suggest using a much stricter constraint in which the factor 2 is replaced by a number an order of magnitude smaller; the use of $\Delta t=0.2\,\omega_{pe}^{-1}$ is typical.[10][14] Not surprisingly, the natural time scale in the plasma is given by the inverse plasma frequency $\omega_{pe}^{-1}$ and the length scale by the Debye length $\lambda_{D}$.

For an explicit electromagnetic plasma simulation, the time step must also satisfy the CFL condition:

$$\Delta t<\frac{\Delta x}{c},$$

where $\Delta x\sim\lambda_{D}$ and $c$ is the speed of light.
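
These constraints are straightforward to check for a candidate grid before running a simulation; the plasma parameters and the proposed $\Delta x$ and $\Delta t$ in the following sketch are made-up example values in SI units.

    import numpy as np

    # Physical constants (SI units)
    e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
    kB, c = 1.381e-23, 2.998e8

    n_e = 1e17                               # electron density [m^-3]
    T_e = 2.0 * 11604.5                      # electron temperature: 2 eV in kelvin
    omega_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))        # plasma frequency [rad/s]
    lambda_D = np.sqrt(eps0 * kB * T_e / (n_e * e**2))   # Debye length [m]

    dx, dt = 1.0e-5, 2.0e-14                 # proposed grid spacing and time step
    print("resolve Debye length:  dx < 3.4*lambda_D  ->", dx < 3.4 * lambda_D)
    print("resolve plasma freq.:  dt <= 0.2/omega_pe ->", dt <= 0.2 / omega_pe)
    print("CFL (electromagnetic): dt < dx/c          ->", dt < dx / c)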

Applications

Within plasma physics, PIC simulation has been used successfully to study laser-plasma interactions, electron acceleration and ion heating in the auroral ionosphere, magnetohydrodynamics, magnetic reconnection, ion-temperature-gradient and other microinstabilities in tokamaks, as well as vacuum discharges and dusty plasmas.

Hybrid models may use the PIC method for the kinetic treatment of some species, while other species (that are Maxwellian) are simulated with a fluid model.

PIC simulations have also been applied outside of plasma physics to problems in solid and fluid mechanics.[15][16]

Electromagnetic particle-in-cell computational applications

Computational application Web site License Availability Canonical Reference
SHARP [17] Proprietary doi:10.3847/1538-4357/aa6d13
ALaDyn [18] GPLv3+ Open Repo:[19] doi:10.5281/zenodo.49553
EPOCH [20] GPLv3 Open Repo:[21] doi:10.1088/0741-3335/57/11/113001
FBPIC [22] 3-Clause-BSD-LBNL Open Repo:[23] doi:10.1016/j.cpc.2016.02.007
LSP [24] Proprietary Available from ATK doi:10.1016/S0168-9002(01)00024-9
MAGIC [25] Proprietary Available from ATK doi:10.1016/0010-4655(95)00010-D
OSIRIS [26] GNU AGPL Open Repo [27] doi:10.1007/3-540-47789-6_36
PICCANTE [28] GPLv3+ Open Repo:[29] doi:10.5281/zenodo.48703
PICLas [30] GPLv3+ Open Repo:[31] doi:10.1016/j.crme.2014.07.005, doi:10.1063/1.5097638
PIConGPU [32] GPLv3+ Open Repo:[33] doi:10.1145/2503210.2504564
SMILEI [34] CeCILL-B Open Repo:[35] doi:10.1016/j.cpc.2017.09.024
iPIC3D [36] Apache License 2.0 Open Repo:[37] doi:10.1016/j.matcom.2009.08.038
The Virtual Laser Plasma Lab (VLPL) [38] Proprietary Unknown doi:10.1017/S0022377899007515
Tristan v2 [39] 3-Clause-BSD Open source,[40] but also has a private version with QED/radiative[41] modules doi:10.5281/zenodo.7566725 [42]
VizGrain [43] Proprietary Commercially available from Esgee Technologies Inc.
VPIC [44] 3-Clause-BSD Open Repo:[45] doi:10.1063/1.2840133
VSim (Vorpal) [46] Proprietary Available from Tech-X Corporation doi:10.1016/j.jcp.2003.11.004
Warp [47] 3-Clause-BSD-LBNL Open Repo:[48] doi:10.1063/1.860024
WarpX [49] 3-Clause-BSD-LBNL Open Repo:[50] doi:10.1016/j.nima.2018.01.035
ZPIC [51] AGPLv3+ Open Repo:[52]
ultraPICA Proprietary Commercially available from Plasma Taiwan Innovation Corporation.

References

  1. ^ F.H. Harlow (1955). "A Machine Calculation Method for Hydrodynamic Problems". Los Alamos Scientific Laboratory report LAMS-1956.
  2. ^ Dawson, J.M. (1983). "Particle simulation of plasmas". Reviews of Modern Physics. 55 (2): 403–447. Bibcode:1983RvMP...55..403D. doi:10.1103/RevModPhys.55.403.
  3. ^ Hideo Okuda (1972). "Nonphysical noises and instabilities in plasma simulation due to a spatial grid". Journal of Computational Physics. 10 (3): 475–486. Bibcode:1972JCoPh..10..475O. doi:10.1016/0021-9991(72)90048-4.
  4. ^ Qin, H.; Liu, J.; Xiao, J.; et al. (2016). "Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov-Maxwell system". Nuclear Fusion. 56 (1): 014001. arXiv:1503.08334. Bibcode:2016NucFu..56a4001Q. doi:10.1088/0029-5515/56/1/014001. S2CID 29190330.
  5. ^ Xiao, J.; Qin, H.; Liu, J.; et al. (2015). "Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems". Physics of Plasmas. 22 (11): 12504. arXiv:1510.06972. Bibcode:2015PhPl...22k2504X. doi:10.1063/1.4935904. S2CID 12893515.
  6. ^ Birdsall, Charles K.; A. Bruce Langdon (1985). Plasma Physics via Computer Simulation. McGraw-Hill. ISBN 0-07-005371-5.
  7. ^ Boris, J.P. (November 1970). "Relativistic plasma simulation-optimization of a hybrid code". Proceedings of the 4th Conference on Numerical Simulation of Plasmas. Naval Res. Lab., Washington, D.C. pp. 3–67.
  8. ^ Qin, H.; et al. (2013). "Why is Boris algorithm so good?" (PDF). Physics of Plasmas. 20 (5): 084503. Bibcode:2013PhPl...20h4503Q. doi:10.1063/1.4818428.
  9. ^ Higuera, Adam V.; John R. Cary (2017). "Structure-preserving second-order integration of relativistic charged particle trajectories in electromagnetic fields". Physics of Plasmas. 24 (5): 052104.
  10. ^ Tskhakaya, David (2008). "Chapter 6: The Particle-in-Cell Method". In Fehske, Holger; Schneider, Ralf; Weiße, Alexander (eds.). Computational Many-Particle Physics. Lecture Notes in Physics. Vol. 739. Springer, Berlin Heidelberg. doi:10.1007/978-3-540-74686-7. ISBN 978-3-540-74685-0.
  11. ^ Takizuka, Tomonori; Abe, Hirotada (1977). "A binary collision model for plasma simulation with a particle code". Journal of Computational Physics. 25 (3): 205–219. Bibcode:1977JCoPh..25..205T. doi:10.1016/0021-9991(77)90099-7.
  12. ^ Birdsall, C.K. (1991). "Particle-in-cell charged-particle simulations, plus Monte Carlo collisions with neutral atoms, PIC-MCC". IEEE Transactions on Plasma Science. 19 (2): 65–85. Bibcode:1991ITPS...19...65B. doi:10.1109/27.106800. ISSN 0093-3813.
  13. ^ Vahedi, V.; Surendra, M. (1995). "A Monte Carlo collision model for the particle-in-cell method: applications to argon and oxygen discharges". Computer Physics Communications. 87 (1–2): 179–198. Bibcode:1995CoPhC..87..179V. doi:10.1016/0010-4655(94)00171-W. ISSN 0010-4655.
  14. ^ Tskhakaya, D.; Matyash, K.; Schneider, R.; Taccogna, F. (2007). "The Particle-In-Cell Method". Contributions to Plasma Physics. 47 (8–9): 563–594. Bibcode:2007CoPP...47..563T. doi:10.1002/ctpp.200710072. S2CID 221030792.
  15. ^ Liu, G.R.; M.B. Liu (2003). Smoothed Particle Hydrodynamics: A Meshfree Particle Method. World Scientific. ISBN 981-238-456-1.
  16. ^ Byrne, F. N.; Ellison, M. A.; Reid, J. H. (1964). "The particle-in-cell computing method for fluid dynamics". Methods Comput. Phys. 3 (3): 319–343. Bibcode:1964SSRv....3..319B. doi:10.1007/BF00230516. S2CID 121512234.
  17. ^ Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip; Pfrommer, Christoph; Lamberts, Astrid; Puchwein, Ewald (23 May 2017). "SHARP: A Spatially Higher-order, Relativistic Particle-in-Cell Code". The Astrophysical Journal. 841 (1): 52. arXiv:1702.04732. Bibcode:2017ApJ...841...52S. doi:10.3847/1538-4357/aa6d13. S2CID 119073489.
  18. ^ "ALaDyn". ALaDyn. Retrieved 1 December 2017.
  19. ^ "ALaDyn: A High-Accuracy PIC Code for the Maxwell-Vlasov Equations". GitHub.com. 18 November 2017. Retrieved 1 December 2017.
  20. ^ "EPOCH". epochpic. Retrieved 14 March 2024.
  21. ^ "EPOCH". GitHub.com. Retrieved 14 March 2024.
  22. ^ "FBPIC documentation — FBPIC 0.6.0 documentation". fbpic.github.io. Retrieved 1 December 2017.
  23. ^ "fbpic: Spectral, quasi-3D Particle-In-Cell code, for CPU and GPU". GitHub.com. 8 November 2017. Retrieved 1 December 2017.
  24. ^ "Orbital ATK". Mrcwdc.com. Retrieved 1 December 2017.
  25. ^ "Orbital ATK". Mrcwdc.com. Retrieved 1 December 2017.
  26. ^ "OSIRIS open-source - OSIRIS". osiris-code.github.io. Retrieved 13 December 2023.
  27. ^ "osiris-code/osiris: OSIRIS Particle-In-Cell code". GitHub.com. Retrieved 13 December 2023.
  28. ^ "Piccante". Aladyn.github.io. Retrieved 1 December 2017.
  29. ^ "piccante: a spicy massively parallel fully-relativistic electromagnetic 3D particle-in-cell code". GitHub.com. 14 November 2017. Retrieved 1 December 2017.
  30. ^ "PICLas".
  31. ^ "piclas-framework/piclas". GitHub.
  32. ^ "PIConGPU - Particle-in-Cell Simulations for the Exascale Era - Helmholtz-Zentrum Dresden-Rossendorf, HZDR". picongpu.hzdr.de. Retrieved 1 December 2017.
  33. ^ "ComputationalRadiationPhysics / PIConGPU — GitHub". GitHub.com. 28 November 2017. Retrieved 1 December 2017.
  34. ^ "Smilei — A Particle-In-Cell code for plasma simulation". Maisondelasimulation.fr. Retrieved 1 December 2017.
  35. ^ "SmileiPIC / Smilei — GitHub". GitHub.com. 29 October 2019. Retrieved 29 October 2019.
  36. ^ Markidis, Stefano; Lapenta, Giovanni; Rizwan-uddin (17 Oct 2009). "Multi-scale simulations of plasma with iPIC3D". Mathematics and Computers in Simulation. 80 (7): 1509. doi:10.1016/j.matcom.2009.08.038.
  37. ^ "iPic3D — GitHub". GitHub.com. 31 January 2020. Retrieved 31 January 2020.
  38. ^ Dreher, Matthias. "Relativistic Laser Plasma". 2.mpq.mpg.de. Retrieved 1 December 2017.
  39. ^ "Tristan v2 wiki | Tristan v2". princetonuniversity.github.io. Retrieved 2022-12-15.
  40. ^ "Tristan v2 public github page". GitHub.
  41. ^ "QED Module | Tristan v2". princetonuniversity.github.io. Retrieved 2022-12-15.
  42. ^ "Tristan v2: Citation.md". GitHub.
  43. ^ "VizGrain". esgeetech.com. Retrieved 1 December 2017.
  44. ^ "VPIC". github.com. Retrieved 1 July 2019.
  45. ^ "LANL / VPIC — GitHub". github.com. Retrieved 29 October 2019.
  46. ^ "Tech-X - VSim". Txcorp.com. Retrieved 1 December 2017.
  47. ^ "Warp". warp.lbl.gov. Retrieved 1 December 2017.
  48. ^ "berkeleylab / Warp — Bitbucket". bitbucket.org. Retrieved 1 December 2017.
  49. ^ "WarpX Documentation". ecp-warpx.github.io. Retrieved 29 October 2019.
  50. ^ "ECP-WarpX / WarpX — GitHub". GitHub.org. Retrieved 29 October 2019.
  51. ^ "Educational Particle-In-Cell code suite". picksc.idre.ucla.edu. Retrieved 29 October 2019.
  52. ^ "ricardo-fonseca / ZPIC — GitHub". GitHub.org. Retrieved 29 October 2019.

Bibliography

  • Birdsall, Charles K.; A. Bruce Langdon (1985). Plasma Physics via Computer Simulation. McGraw-Hill. ISBN 0-07-005371-5.
  • Hockney, Roger W.; James W. Eastwood (1988). Computer Simulation Using Particles. CRC Press. ISBN 0-85274-392-0.

External links

  • Beam, Plasma & Accelerator Simulation Toolkit (BLAST)
  • Particle-In-Cell and Kinetic Simulation Software Center (PICKSC), UCLA.
  • Open source 3D Particle-In-Cell code for spacecraft plasma interactions (mandatory user registration required).
  • Simple Particle-In-Cell code in MATLAB
  • Plasma Theory and Simulation Group (Berkeley) Contains links to freely available software.
  • Introduction to PIC codes (Univ. of Texas)
  • open-pic - 3D Hybrid Particle-In-Cell simulation of plasma dynamics