Welcome to the PySPH documentation!¶
PySPH is an open source framework for Smoothed Particle Hydrodynamics (SPH) simulations. Users can implement an SPH formulation in pure Python and still obtain excellent performance. PySPH can make use of multiple cores via OpenMP or be run seamlessly in parallel using MPI.
Here are some videos of simulations made with PySPH.
PySPH is hosted on github. Please see the github site for development details.
Overview¶
PySPH is an open source framework for Smoothed Particle Hydrodynamics (SPH) simulations. It is implemented in Python and the performance critical parts are implemented in Cython and PyOpenCL.
PySPH is implemented in a way that allows a user to specify the entire SPH simulation in pure Python. High-performance code is generated from this high-level Python code, compiled on the fly and executed. PySPH can use OpenMP to utilize multi-core CPUs effectively. PySPH can work with OpenCL and use your GPGPUs. PySPH also features optional automatic parallelization (multi-CPU) using mpi4py and Zoltan. If you wish to use the parallel capabilities you will need to have these installed.
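For example, SPH equations are written as ordinary Python classes. The sketch below closely follows the standard summation density equation shipped in pysph.sph.basic_equations.SummationDensity; by PySPH convention, d_* arrays refer to destination particles, s_* arrays to source (neighbor) particles, and WIJ is the kernel value for the pair:
from pysph.sph.equation import Equation

class SummationDensity(Equation):
    """Density as a kernel-weighted sum over neighbor masses."""
    def initialize(self, d_idx, d_rho):
        # Zero the destination particle's density before accumulation.
        d_rho[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_rho, s_m, WIJ):
        # Accumulate each neighbor's mass weighted by the kernel.
        d_rho[d_idx] += s_m[s_idx]*WIJ
High-performance code is generated from classes like this, compiled, and executed transparently.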
Features¶
- User scripts and equations are written in pure Python.
- Flexibility to define arbitrary SPH equations operating on particles.
- Ability to define your own multi-step integrators in pure Python.
- High-performance: our performance is comparable to hand-written solvers implemented in FORTRAN.
- Seamless multi-core support with OpenMP.
- Seamless GPU support with PyOpenCL.
- Seamless parallel integration using Zoltan.
- BSD license.
SPH formulations¶
Currently, PySPH has numerous examples to solve the viscous, incompressible Navier-Stokes equations using the weakly compressible (WCSPH) approach. The following formulations are currently implemented:
- Weakly Compressible SPH (WCSPH) for free-surface flows (Gesteira et al. 2010, Journal of Hydraulic Research, 48, pp. 6–27)

(Figure: 3D dam-break past an obstacle, SPHERIC benchmark Test 2.)
- Transport Velocity Formulation for incompressible fluids (Adami et al. 2013, JCP, 241, pp. 292–307).
- SPH for elastic dynamics (Gray et al. 2001, CMAME, Vol. 190, pp 6641–6662)
- Compressible SPH (Puri et al. 2014, JCP, Vol. 256, pp 308–333)
- Generalized Transport Velocity Formulation (GTVF) (Zhang et al. 2017, JCP, 337, pp. 216–232)
- Entropically Damped Artificial Compressibility (EDAC) (Ramachandran et al. 2019, Computers and Fluids, 179, pp. 579–594)
- delta-SPH (Marrone et al. CMAME, 2011, 200, pp. 1526–1542)
- Dual Time SPH (DTSPH) (Ramachandran et al. arXiv preprint)
- Incompressible (ISPH) (Cummins et al. JCP, 1999, 152, pp. 584–607)
- Simple Iterative SPH (SISPH) (Muta et al. arXiv preprint)
- Implicit Incompressible SPH (IISPH) (Ihmsen et al. 2014, IEEE Trans. Vis. Comput. Graph., 20, pp 426–435)
- Godunov SPH (GSPH) (Inutsuka et al. JCP, 2002, 179, pp. 238–267)
- Conservative Reproducing Kernel SPH (CRKSPH) (Frontiere et al. JCP, 2017, 332, pp. 160–209)
- Approximate Godunov SPH (AGSPH) (Puri et al. JCP, 2014, pp. 432–458)
- Adaptive Density Kernel Estimate (ADKE) (Sigalotti et al. JCP, 2006, pp. 124–149)
- Akinci (Akinci et al. ACM Trans. Graph., 2012, pp. 62:1–62:8)
Boundary conditions from the following papers are implemented:
- Generalized Wall BCs (Adami et al. JCP, 2012, pp. 7057–7075)
- Do nothing type outlet BC (Federico et al. European Journal of Mechanics - B/Fluids, 2012, pp. 35–46)
- Outlet Mirror BC (Tafuni et al. CMAME, 2018, pp. 604–624)
- Method of Characteristics BC (Lastiwka et al. International Journal for Numerical Methods in Fluids, 2009, pp. 709–724)
- Hybrid BC (Negi et al. arXiv preprint)
Corrections proposed in the following papers are also part of PySPH:
- Corrected SPH (Bonet et al. CMAME, 1999, pp. 97–115)
- hg-correction (Hughes et al. Journal of Hydraulic Research, pp. 105–117)
- Tensile instability correction (Monaghan J. J. JCP, 2000, pp. 290–311)
- Particle shift algorithms (Xu et al. JCP, 2009, pp. 6703–6725), (Skillen et al. CMAME, 2013, pp. 163–173)
Surface tension models are implemented from:
- Morris surface tension (Morris et al. International Journal for Numerical Methods in Fluids, 2000, pp. 333–353)
- Adami surface tension formulation (Adami et al. JCP, 2010, pp. 5011–5021)
Credits¶
PySPH is primarily developed at the Department of Aerospace Engineering, IIT Bombay. We are grateful to IIT Bombay for the support. Our primary goal is to build a powerful SPH-based tool for both application and research. We hope that this makes it easy to perform reproducible computational research.
To see the list of contributors, see the github contributors page.
Some earlier developers not listed above are:
- Pankaj Pandey (stress solver and improved load balancing, 2011)
- Chandrashekhar Kaushik (original parallel and serial implementation in 2009)
Research papers using PySPH¶
The following are some of the works that use PySPH:
- Adaptive SPH method: https://gitlab.com/pypr/adaptive_sph
- Adaptive SPH method applied to moving bodies: https://gitlab.com/pypr/asph_motion
- Convergence of the SPH method: https://gitlab.com/pypr/convergence_sph
- Corrected transport velocity formulation: https://gitlab.com/pypr/ctvf
- Dual-Time SPH method: https://gitlab.com/pypr/dtsph
- Entropically damped artificial compressibility SPH formulation: https://gitlab.com/pypr/edac_sph
- Generalized inlet and outlet boundary conditions for SPH: https://gitlab.com/pypr/inlet_outlet
- Method of manufactured solutions for SPH: https://gitlab.com/pypr/mms_sph
- A demonstration of the binder support provided by PySPH: https://gitlab.com/pypr/pysph_demo
- Manuscript and code for a paper on PySPH: https://gitlab.com/pypr/pysph_paper
- Simple Iterative Incompressible SPH scheme: https://gitlab.com/pypr/sisph
- Geometry generation and preprocessing for SPH simulations: https://gitlab.com/pypr/sph_geom
Citing PySPH¶
You may use the following article to formally refer to PySPH; a freely available arXiv copy of the paper is at https://arxiv.org/abs/1909.04504.
- Prabhu Ramachandran, Aditya Bhosale, Kunal Puri, Pawan Negi, Abhinav Muta, A. Dinesh, Dileep Menon, Rahul Govind, Suraj Sanka, Amal S Sebastian, Ananyo Sen, Rohan Kaushik, Anshuman Kumar, Vikas Kurapati, Mrinalgouda Patil, Deep Tavker, Pankaj Pandey, Chandrashekhar Kaushik, Arkopal Dutt, Arpit Agarwal. “PySPH: A Python-Based Framework for Smoothed Particle Hydrodynamics”. ACM Transactions on Mathematical Software 47, no. 4 (31 December 2021): 1–38. DOI: https://doi.org/10.1145/3460773.
The bibtex entry is:
@article{ramachandran2021a,
  title = {{{PySPH}}: {{A Python-based Framework}} for {{Smoothed Particle Hydrodynamics}}},
  shorttitle = {{{PySPH}}},
  author = {Ramachandran, Prabhu and Bhosale, Aditya and Puri, Kunal and Negi, Pawan and Muta, Abhinav and Dinesh, A. and Menon, Dileep and Govind, Rahul and Sanka, Suraj and Sebastian, Amal S. and Sen, Ananyo and Kaushik, Rohan and Kumar, Anshuman and Kurapati, Vikas and Patil, Mrinalgouda and Tavker, Deep and Pandey, Pankaj and Kaushik, Chandrashekhar and Dutt, Arkopal and Agarwal, Arpit},
  year = {2021},
  month = dec,
  journal = {ACM Transactions on Mathematical Software},
  volume = {47},
  number = {4},
  pages = {1--38},
  issn = {0098-3500, 1557-7295},
  doi = {10.1145/3460773},
  langid = {english}
}
The following are older presentations:
- Prabhu Ramachandran, PySPH: a reproducible and high-performance framework for smoothed particle hydrodynamics, In Proceedings of the 15th Python in Science Conference, pages 127–135, July 11th to 17th, 2016. Link to paper.
- Prabhu Ramachandran and Kunal Puri, PySPH: A framework for parallel particle simulations, In proceedings of the 3rd International Conference on Particle-Based Methods (Particles 2013), Stuttgart, Germany, 18th September 2013.
History¶
- 2009: PySPH started with a simple Cython based 1D implementation written by Prabhu.
- 2009-2010: Chandrashekhar Kaushik worked on a full 3D SPH implementation with a more general purpose design. The implementation was in a mix of Cython and Python.
- 2010-2012: The previous implementation was a little too complex and was largely overhauled by Kunal and Pankaj. This became the PySPH 0.9beta release. The difficulty with this version was that it was almost entirely written in Cython, making it hard to extend or add new formulations without writing more Cython code. Doing this was difficult and not too pleasant. In addition, it was not as fast as we would have liked. It ended up feeling like we might as well have implemented it all in C++ and exposed a Python interface to that.
- 2011-2012: Kunal also implemented SPH2D and another internal version called ZSPH in Cython, which included Zoltan based parallelization using PyZoltan. This was specific to his PhD research and again required writing Cython, making it difficult for the average user to extend.
- 2013-present: In early 2013, Prabhu reimplemented the core of PySPH to be almost entirely auto-generated from pure Python. The resulting code was faster than previous implementations and very easy to extend entirely from pure Python. Kunal and Prabhu integrated PyZoltan into PySPH and the current version of PySPH was born. Subsequently, OpenMP support was also added in 2015.
Support¶
If you have any questions or are running into any difficulties with PySPH you can use the PySPH discussions to ask questions or look for answers.
Please also take a look at the PySPH issue tracker if you have bugs or issues to report.
You could also email or post your questions on the pysph-users mailing list here: https://groups.google.com/d/forum/pysph-users
Changelog¶
1.0b2¶
- Release date: Still under development.
1.0b1¶
Around 140 pull requests were merged. Thanks to all who contributed to this release (in alphabetical order): Abhinav Muta, Aditya Bhosale, Amal Sebastian, Ananyo Sen, Antonio Valentino, Dinesh Adepu, Jeffrey D. Daye, Navaneet, Miloni Atal, Pawan Negi, Prabhu Ramachandran, Rohan Kaushik, Tetsuo Koyama, and Yash Kothari.
- Release date: 1st March 2022.
- Enhancements:
- Use github actions for tests and also test OpenCL support on CI.
- Parallelize the build step of the octree NNPS on the CPU.
- Support for packing initial particle distributions.
- Add support for setting load balancing weights for particle arrays.
- Use meshio to read data and convert them into particles.
- Add support for conditional group of equations.
- Add options to control loop limits in a Group.
- Add pysph binder, pysph cull, and pysph cache.
- Use OpenMP for initialize, loop and post_loop.
- Added many SPH schemes: CRKSPH, SISPH, basic ISPH, SWE, TSPH, PSPH.
- Added a mirror boundary condition along coordinate axes.
- Add support for much improved inlets and outlets.
- Add option --reorder-freq to turn on spatial reordering of particles.
- API: Integrators explicitly call update_domain.
- Basic CUDA support.
- Many important improvements to the pysph Mayavi viewer.
- Many improvements to the 3D and 2D jupyter viewer.
- Application.customize_output can be used to customize the viewer.
- Use ~/.compyle/config.py for user customizations.
- Split pyzoltan, cyarray, and compyle into their own packages on pypi.
- Bug fixes:
- Fix issue with update_nnps being called too many times when set for a group.
- Many OpenCL related fixes and improvements.
- Fix bugs in the parallel manager code and add profiling information.
- Fix hdf5 compressed output.
- Fix pysph dump_vtk.
- Many fixes to various schemes.
- Fix memory leak with the neighbor caching.
- Fix issues with using PySPH on FreeBSD.
1.0a6¶
90 pull requests were merged for this release. Thanks to the following who contributed to this release (in alphabetical order): A Dinesh, Abhinav Muta, Aditya Bhosale, Ananyo Sen, Deep Tavker, Prabhu Ramachandran, Vikas Kurapati, nilsmeyerkit, Rahul Govind, Sanka Suraj.
- Release date: 26th November, 2018.
- Enhancements:
- Initial support for transparently running PySPH on a GPU via OpenCL.
- Changed the API for how the adaptive DT is computed; this is now set in the particle array properties dt_cfl, dt_force, and dt_visc.
- Support for non-pairwise particle interactions via the loop_all method. This is useful for MD simulations.
- Add support for py_stage1, py_stage2, ... methods in the integrator.
- Add support for py_initialize and initialize_pair in equations.
- Support for using different sets of equations for different stages of the integration.
- Support to call arbitrary Python code from a Group via the pre/post callback arguments.
- Pass t, dt to the reduce method.
- Allow particle array properties to have strides; this allows us to define properties with multiple components. For example, if you need 3 values per particle, you can set the stride to 3.
- Mayavi viewer can now show non-real particles also if saved in the output.
- Some improvements to the simple remesher of particles.
- Add simple STL importer to import geometries.
- Allow user to specify openmp schedule.
- Better documentation on equations and using a different compiler.
- Print a convenient warning when particles are diverging or if h, m are zero.
- Abstract the code generation into a common core which supports Cython, OpenCL and CUDA. This will be pulled into a separate package in the next release.
- New GPU NNPS algorithms including a very fast oct-tree.
- Added several sphysics test cases to the examples.
- Schemes:
- Add a working Implicit Incompressible SPH scheme (of Ihmsen et al., 2014)
- Add GSPH scheme from SPH2D and all the approximate Riemann solvers from there.
- Add code for Shepard and MLS-based density corrections.
- Add kernel corrections proposed by Bonet and Lok (1999)
- Add corrections from the CRKSPH paper (2017).
- Add basic equations of Parshikov (2002) and Zhang, Hu, Adams (2017)
- Bug fixes:
- Ensure that the order of equations is preserved.
- Fix bug with dumping VTK files.
- Fix bug in Adami, Hu, Adams scheme in the continuity equation.
- Fix mistake in WCSPH scheme for solid bodies.
- Fix bug with periodicity along the z-axis.
1.0a5¶
- Release date: 17th September, 2017
- Mayavi viewer now supports empty particle arrays.
- Fix error in scheme chooser which caused problems with default scheme property values.
- Add starcluster support/documentation so PySPH can be easily used on EC2.
- Improve the particle array so it automatically ravels the passed arrays and also accepts constant values without needing an array each time.
- Add a few new examples.
- Added 2D and 3D viewers for Jupyter notebooks.
- Add several new Wendland Quintic kernels.
- Add option to measure coverage of Cython code.
- Add EDAC scheme.
- Move project to github.
- Improve documentation and reference section.
- Fix various bugs.
- Switch to using pytest instead of nosetests.
- Add a convenient geometry creation module in pysph.tools.geometry.
- Add support to script the viewer with a Python file, see pysph view -h.
- Add several new NNPS schemes like extended spatial hashing, SFC, oct-trees etc.
- Improve Mayavi viewer so one can view the velocity vectors and any other vectors.
- Viewer now has a button to edit the visualization properties easily.
- Add simple tests for all available kernels. Add a SuperGaussian kernel.
- Add a basic dockerfile for pysph to help with the CI testing.
- Update build so pysph can be built with a system zoltan installation that is part of trilinos, using the USE_TRILINOS environment variable.
- Wrap the Zoltan_Comm_Resize function in pyzoltan.
1.0a4¶
- Release date: 14th July, 2016.
- Improve many examples to make it easier to make comparisons.
- Many equation parameters no longer have defaults to prevent accidental errors from not specifying important parameters.
- Added support for Scheme classes that manage the generation of equations and solvers. A user simply needs to create the particles and set up a scheme with the appropriate parameters to simulate a problem.
- Add support to easily handle multiple rigid bodies.
- Add support to dump HDF5 files if h5py is installed.
- Add support to directly dump VTK files using either Mayavi or PyVisfile; see pysph dump_vtk.
- Improved the nearest neighbor code, which gives about 30% increase in performance in 3D.
- Remove the need for the windows_env.bat script on Windows. This is automatically set up internally.
- Add test that checks if all examples run.
- Remove unused command line options and add a --max-steps option to allow a user to run a specified number of iterations.
- Added Ghia et al.’s results for lid-driven-cavity flow for easy comparison.
- Added some experimental results for the dam break problem.
- Use argparse instead of optparse, since the latter is deprecated in Python 3.x.
- Add pysph.tools.automation to facilitate easier automation and reproducibility of PySPH simulations.
- Add spatial hash and extended spatial hash NNPS algorithms for comparison.
- Refactor and cleanup the NNPS related code.
- Add several gas-dynamics examples and the ADKEScheme.
- Work with mpi4py version 2.0.0 and older versions.
- Fixed major bug with TVF implementation and add support for 3D simulations with the TVF.
- Fix bug with uploaded tarballs that breaks pip install pysph on Windows.
- Fix the viewer UI to continue playing files when refresh is pushed.
- Fix bugs with the timestep values dumped in the outputs.
- Fix floating point issues with timesteps, where examples would run a final extremely tiny timestep in order to exactly hit the final time.
1.0a3¶
- Release date: 18th August, 2015.
- Fix bug with output_at_times specification for solver.
- Put generated sources and extensions into a platform specific directory in ~/.pysph/sources/<platform-specific-dir> to avoid problems with multiple Python versions, operating systems etc.
- Use locking while creating extension modules to prevent problems when multiple processes generate the same extension.
- Improve the Application class so users can subclass it to create examples. Users can also add their own command line arguments and add pre/post step/stage callbacks by creating appropriate methods.
- Moved examples into the pysph.examples package. This makes the examples reusable and easier to run, as installation of pysph will also make the examples available. The examples also perform the post-processing to make them completely self-contained.
- Add support to write compressed output.
- Add support to set the kernel from the command line.
- Add a new pysph script that supports view, run, and test sub-commands. The pysph_viewer is now removed; use pysph view instead.
- Add a simple remeshing tool in pysph.solver.tools.SimpleRemesher.
- Clean up the symmetric eigenvalue computing routines used for solid mechanics problems and allow them to be used with OpenMP.
- The viewer can now view the velocity magnitude (vmag) even if it is not present in the data.
- Port all examples to use the new Application API.
- Do not display unnecessary compiler warnings when there are no errors, but display verbose details when there is an error.
1.0a2¶
- Release date: 12th June, 2015
- Support for tox, this makes it trivial to test PySPH on py26, py27 and py34 (and potentially more if needed).
- Fix bug in code generator where it is unable to import pysph before it is installed.
- Support installation via pip by allowing egg_info to be run without cython or numpy.
- Added Codeship CI build using tox for py27 and py34.
- CI builds for Python 2.7.x and 3.4.x.
- Support for Python-3.4.x.
- Support for Python-2.6.x.
1.0a1¶
- Release date: 3rd June, 2015.
- First public release of the new PySPH code which uses code-generation and is hosted on bitbucket.
- OpenMP support.
- MPI support using Zoltan.
- Automatic code generation from high-level Python code.
- Support for various multi-step integrators.
- Added an interpolator utility module that interpolates the particle data onto a desired set of points (or grids).
- Support for inlets and outlets.
- Support for basic Gmsh input/output.
- Plenty of examples for various SPH formulations.
- Improved documentation.
- Continuous integration builds on Shippable, Drone.io, and AppVeyor.
Installation and getting started¶
To install PySPH, you need a working Python environment with the required dependencies installed. You may use any of the available Python distributions. PySPH is currently tested with Python 3.x. If you are new to Python we recommend EDM or Anaconda. PySPH will work fine with miniconda, Anaconda or other environments like WinPython. The following instructions should help you get started.
Since there is a lot of information here, we suggest that you skim the section on Quick installation, Dependencies and then directly jump to one of the “Installing the dependencies on xxx” sections below depending on your operating system. If you need to use MPI please do go through Installation with MPI first though.
Depending on your chosen Python distribution, simply follow the instructions and links referred therein.
- Quick installation
- Installation with MPI
- Using the configuration file
- Dependencies
- Installing the dependencies on GNU/Linux
- Installing the dependencies on Ubuntu 18.04
- Installing the dependencies on Mac OS X
- Installing the dependencies on Windows
- Using a virtualenv for PySPH
- Downloading PySPH
- Building and Installing PySPH
- Issues with the pip cache
- Running the tests
- Running the examples
- Possible issues with the viewer
Quick installation¶
If you are reasonably experienced with installing Python packages, already have a C++ compiler set up on your machine, and are not immediately interested in running PySPH on multiple CPUs (using MPI), then installing PySPH is straightforward. Running pip like so:
$ pip install PySPH
should do the trick. You may do this in a virtualenv if you chose to. The important examples are packaged with the sources, you should be able to run those immediately. If you wish to download the sources and explore them, you can download the sources either using the tarball/ZIP or from git, see Downloading PySPH. If you need MPI support you should first read Installation with MPI.
The above will install the latest released version of PySPH; you can install the development version using:
$ pip install https://github.com/pypr/pysph/zipball/master
If you wish to track the development of the package, clone the repository (as described in Downloading PySPH) and do the following:
$ pip install -r requirements.txt
$ python setup.py develop
The following instructions are more detailed and also show how optional dependencies can be installed. Instructions on how to set things up on Windows are also available below.
If you are running into strange issues when you are setting up an installation with ZOLTAN, see here, Issues with the pip cache.
Installation with MPI¶
These are the big picture instructions for installation with MPI. This can be tricky since MPI is often very tuned to the specific hardware you are using. For example on large HPC clusters, different flavors of highly optimized MPI libraries are made available. These require different compilation and link flags and often different compilers are available as well.
In addition to this, the Python package installer pip tries to build wheels in an isolated environment by default. This is a problem when installing packages which use libraries like MPI. Our recommendations and notes here are so you understand what is going on.
The first thing you will need to do is install mpi4py and test that it works
well. Read the documentation so your mpi4py is suitably configured for your
hardware and works correctly. You will then need to install PyZoltan which
requires that the Zoltan library be installed. The installation instructions
are available in the PyZoltan documentation, but you must ensure that you either install it from source using python setup.py install or python setup.py develop, or, if you install it with pip, do this:
$ pip install pyzoltan --no-build-isolation
This shuts off pip’s default build isolation so it picks up your installed version of mpi4py. Once this is installed you can install pysph using:
$ pip install pysph --no-build-isolation
Basically, if you use pip with MPI support you will need to turn off its default build isolation. On the other hand, you do not need to do anything special if you install using python setup.py install.
Finally, given that custom MPI environments require custom compile/link flags you may find it worthwhile using a configuration file to set these up for both PyZoltan and PySPH as discussed in Using the configuration file.
Using the configuration file¶
Instead of setting environment variables and build options on the shell you can have them setup using a simple configuration file. This is the same as that described in the PyZoltan documentation and is entirely optional but if you are customizing your builds for MPI, this may be very useful.
The file is located in ~/.compyle/config.py (we use the same file for compyle and PyZoltan). Here ~ is your home directory, which on Linux is /home/username, on MacOS is /Users/username, and on Windows is likely \Users\username. This file is executed and certain options may be set there.
For example, if you wish to set the appropriate C and C++ compiler (icc, Cray, or PGI), you may set the CC and CXX environment variables. You could do this in ~/.compyle/config.py:
import os
os.environ['CC'] = 'cc'
os.environ['CXX'] = 'CC'
The above are for a Cray system. You may also setup custom OpenMP related flags. For example, on a Cray system you may do the following:
OMP_CFLAGS = ['-homp']
OMP_LINK = ['-homp']
The OMP_CFLAGS and OMP_LINK parameters should be lists.
The MPI and ZOLTAN specific options are:
MPI_CFLAGS = ['...'] # must be a list.
MPI_LINK = ['...']
# Zoltan options
USE_TRILINOS = 1 # When set to anything, use "-ltrilinos_zoltan".
ZOLTAN = '/path/to_zoltan' # looks inside this for $ZOLTAN/include/, lib/
# Not needed if using ZOLTAN
ZOLTAN_INCLUDE = 'path/include' # path to zoltan.h
ZOLTAN_LIBRARY = 'path/lib' # path to libzoltan.a
Note that the above lists all the different options. You do not need to set them all; set only those you need if the defaults do not work for you.
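Putting these together, a complete ~/.compyle/config.py for a hypothetical cluster could look like the sketch below; the compiler names, flags, and Zoltan path are placeholders to adapt to your system:
import os

# Compiler wrappers (placeholders for your site's compilers).
os.environ['CC'] = 'cc'
os.environ['CXX'] = 'CC'

# OpenMP compile and link flags; these must be lists.
OMP_CFLAGS = ['-fopenmp']
OMP_LINK = ['-fopenmp']

# Zoltan installed under a custom prefix (placeholder path);
# $ZOLTAN/include and $ZOLTAN/lib will be searched.
ZOLTAN = '/opt/zoltan'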
Dependencies¶
Core dependencies¶
The core dependencies are listed in the project’s requirements.txt.
These packages can be installed from your Python distribution’s package manager, or using pip. For more detailed instructions on how to do this for different distributions, see below.
Running PySPH requires a working C/C++ compiler on your machine. On Linux/OS X the gcc toolchain will work well. On Windows, you will need to have a suitable MSVC compiler installed, see https://wiki.python.org/moin/WindowsCompilers for specific details.
On Python 2.7 for example, you will need Microsoft Visual C++ Compiler for Python 2.7 or an equivalent compiler. More details are available below.
Note
PySPH generates high-performance code and compiles it on the fly. This requires a working C/C++ compiler even after installing PySPH.
Optional dependencies¶
The optional dependencies are:
- OpenMP: PySPH can use OpenMP if it is available. Installation instructions are available below.
- PyOpenCL: PySPH can use OpenCL if it is available. This requires installing PyOpenCL.
- PyCUDA: PySPH can use CUDA if it is available. This requires installing PyCUDA.
- Mayavi: PySPH provides a convenient viewer to visualize the output of simulations. This viewer can be launched using the command pysph view and requires Mayavi to be installed. Since this is only a viewer it is optional; however, it is highly recommended that you have it installed as the viewer is very convenient.
- mpi4py and Zoltan: If you want to use PySPH in parallel, you will need mpi4py and the Zoltan data management library along with the PyZoltan package. PySPH will work in serial without mpi4py or Zoltan. Simple build instructions for Zoltan are included below, but please go through the Installation with MPI section to get an overview.
Mayavi is packaged with all the major distributions and is easy to install. Zoltan is very unlikely to be already packaged and will need to be compiled.
Building and linking PyZoltan on OSX/Linux¶
If you want to use PySPH in parallel you will need to install PyZoltan. PyZoltan requires the Zoltan library to be available. We’ve provided a simple Zoltan build script in the PyZoltan repository. This works on Linux and OS X but not on Windows. It can be used as:
$ ./build_zoltan.sh $INSTALL_PREFIX
where $INSTALL_PREFIX is where the library and includes will be installed (remember, this script is in the PyZoltan repository and not in PySPH). You may edit and tweak the build to suit your installation. However, this script is what we use to build Zoltan on our continuous integration servers on Travis-CI and Shippable.
After Zoltan is built, set the environment variable ZOLTAN to point to the $INSTALL_PREFIX that you used above:
$ export ZOLTAN=$INSTALL_PREFIX
Replace $INSTALL_PREFIX with the directory you specified above. After this, follow the instructions to build PyZoltan. The PyZoltan wrappers will be compiled and available.
Now, when you build PySPH, it too needs to know where to link to Zoltan and
you should keep the ZOLTAN
environment variable set. This is only needed
until PySPH is compiled, thereafter we do not need the environment variable.
If you are running into strange issues when you are setting up pysph with ZOLTAN, see here, Issues with the pip cache.
Note
The installation will use $ZOLTAN/include and $ZOLTAN/lib to find the actual directories. If these do not work for your particular installation for whatever reason, set the environment variables ZOLTAN_INCLUDE and ZOLTAN_LIBRARY explicitly without setting ZOLTAN. If you used the above script, this would be:
$ export ZOLTAN_INCLUDE=$INSTALL_PREFIX/include
$ export ZOLTAN_LIBRARY=$INSTALL_PREFIX/lib
If Zoltan can be installed through your distro’s package manager or using alternate tools, it is not mandatory to use the provided zoltan build script.
For example, if you are on Arch or an Arch-based distro, this can be accomplished using zoltan or trilinos from the AUR. Then, the environment variables should be set as:
$ export ZOLTAN_INCLUDE=/usr/include
$ export ZOLTAN_LIBRARY=/usr/lib
Similarly, for Ubuntu, see Installing the dependencies on Ubuntu 18.04.
By the way, you may also set these in the configuration file described in Using the configuration file.
Installing the dependencies on GNU/Linux¶
If you are using EDM or Anaconda the instructions in the section Installing the dependencies on Mac OS X will be useful as the instructions are the same. The following are for the case where you wish to use the native Python packages distributed with the Linux distribution you are using.
If you are running into trouble, note that it is very easy to install using EDM (see Using EDM) or conda (see Using Anaconda) and you may make your life easier by going that route.
GNU/Linux is probably the easiest platform to install PySPH. On Ubuntu one may install the dependencies using:
$ sudo apt-get install build-essential python-dev python-numpy \
python-mako cython python-pytest mayavi2 python-qt4 python-virtualenv
OpenMP is typically available but if it is not, it can be installed with:
$ sudo apt-get install libomp-dev
If you need parallel support:
$ sudo apt-get install libopenmpi-dev python-mpi4py
$ ./build_zoltan.sh ~/zoltan # Replace ~/zoltan with what you want
$ export ZOLTAN=~/zoltan
On Linux it is probably best to install PySPH into its own virtual environment. This will allow you to install PySPH as a user without any superuser privileges. See the section below on Using a virtualenv for PySPH. In short, do the following:
$ virtualenv --system-site-packages pysph_env
$ source pysph_env/bin/activate
$ pip install cython --upgrade # if you have an old version.
If you wish to use a compiler which is not currently your default compiler, simply update the CC and CXX environment variables. For example, to use icc run the following commands before building PySPH:
$ export CC=icc
$ export CXX=icpc
Note
In this case, you will additionally have to ensure that the relevant Intel shared libraries can be found when running PySPH code. Most Intel installations come with shell scripts that load the relevant environment variables with the right values automatically. This shell script is generally named compilervars.sh and can be found in /path/to/icc/bin. If you did not get this file along with your installation, you can try running export LD_LIBRARY_PATH=/path/to/icc/lib.
Note that you may also set the configuration options in the configuration file described in Using the configuration file.
You should be set now and should skip to Downloading PySPH and Building and Installing PySPH.
On recent versions of Ubuntu (16.10 and 18.04) there may be problems with the Mayavi viewer, and pysph view may not work correctly. To see how to resolve these, please look at Possible issues with the viewer.
Note
If you wish to see a working build/test script please see our shippable.yml.
Installing the dependencies on Ubuntu 18.04¶
On Ubuntu 18.04 it should be relatively simple to install PySPH with ZOLTAN as follows:
# For OpenMP
$ sudo apt-get install libomp-dev
# For Zoltan
$ sudo apt-get install openmpi-bin libopenmpi-dev libtrilinos-zoltan-dev
$ export ZOLTAN_INCLUDE=/usr/include/trilinos
$ export ZOLTAN_LIBRARY=/usr/lib/x86_64-linux-gnu
$ export USE_TRILINOS=1
You may also set these options in the configuration file described in Using the configuration file.
Now depending on your setup you can install the Python related dependencies. For example with conda you can do:
$ conda install -c conda-forge cython mako matplotlib jupyter pyside pytest \
mock meshio pytools
$ conda install -c conda-forge mpi4py
Then you should be able to install pyzoltan and its dependency cyarray using:
$ pip install pyzoltan --no-build-isolation
Finally, install PySPH with:
$ pip install pysph --no-build-isolation
Or, if you are having trouble due to pip’s cache (as discussed in Issues with the pip cache), with:
$ pip install --no-cache-dir --no-build-isolation pysph
You should be all set now and should next consider Running the tests.
Note
The --no-build-isolation argument to pip is necessary because without it, pip will attempt to create an isolated environment and build a pyzoltan wheel inside that isolated environment. This means it will not see the mpi4py that you have built and installed, which could end up causing all sorts of problems, especially if you have a custom MPI library.
Installing the dependencies on Mac OS X¶
On OS X, your best bet is to install EDM, or Anaconda or some other Python distribution. Ensure that you have gcc or clang installed by installing XCode. See this if you installed XCode but can’t find clang or gcc.
If you are getting strange errors of the form:
clang: warning: libstdc++ is deprecated; move to libc++ with a minimum deployment target of OS X 10.9 [-Wdeprecated]
ld: library not found for -lstdc++
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Then try this (on a bash shell):
$ export MACOSX_DEPLOYMENT_TARGET=10.9
And run your command again (replace the above with a suitable line on other
shells). This is necessary because your Python was compiled with an older
deployment target and the current version of XCode that you have installed is
not compatible with that. By setting the environment variable you allow
compyle to use a newer version. If this works, it is a good idea to set this
in your default environment (.bashrc
for bash shells) so you do not have
to do this every time.
OpenMP on OSX¶
The default clang compiler available on MacOS uses an LLVM backend and does not support OpenMP. There are two ways to support OpenMP. The first involves installing the OpenMP support for clang. This can be done with brew using:
$ brew install libomp
Once that is done, it should “just work”. If you get strange errors, try setting the MACOSX_DEPLOYMENT_TARGET as shown above.
Another option is to install GCC for MacOS available on brew using
$ brew install gcc
Once this is done, you need to use this as your default compiler. The gcc
formula on brew currently ships with gcc version 9. Therefore, you can
tell Python to use the GCC installed by brew by setting:
$ export CC=gcc-9
$ export CXX=g++-9
Note that you still do need to have the command-line-tools for XCode
installed, otherwise the important header files are not available. See
how-to-install-xcode-command-line-tools
for more details. You may also want to set these environment variables in your
.bashrc
so you don’t have to do this every time.
Once you do this, compyle will automatically use this version of GCC and will also work with OpenMP. Note that on some preliminary benchmarks, GCC’s OpenMP implementation seems about 10% or so faster than the LLVM version. Your mileage may vary.
Using EDM¶
It is very easy to install all the dependencies with the Enthought Deployment Manager (EDM).
Download the EDM installer if you do not already have it installed. Install the appropriate installer package for your system.
Once you have installed EDM, run the following:
$ edm install mayavi pyside cython matplotlib jupyter pytest mock pip
$ edm shell
$ pip install mako
With this done, you should be able to install PySPH relatively easily, see Building and Installing PySPH.
Using Anaconda¶
After installing Anaconda or miniconda, you will need to make sure the dependencies are installed. You can create a separate environment as follows:
$ conda create -n pysph_env
$ source activate pysph_env
Now you can install the necessary packages:
$ conda install -c conda-forge cython mako matplotlib jupyter pyside pytest mock
$ conda install -c menpo mayavi
If you need parallel support, please see Installing mpi4py and Zoltan on OS X, otherwise, skip to Downloading PySPH and Building and Installing PySPH.
Installing mpi4py and Zoltan on OS X¶
In order to build/install mpi4py one first has to install the MPI library.
This is easily done with Homebrew as follows (you need to have brew
installed for this but that is relatively easy to do):
$ brew install open-mpi
After this is done, one can install mpi4py by hand. First download mpi4py from here. Then run the following (modify these to suit your XCode installation and version of mpi4py):
$ cd /tmp
$ tar xvzf ~/Downloads/mpi4py-1.3.1.tar.gz
$ cd mpi4py-1.3.1
$ export MACOSX_DEPLOYMENT_TARGET=10.7
$ export SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/
$ python setup.py install
Change the above environment variables to suit your SDK version. If this installs correctly, mpi4py should be available.
You can then follow the instructions on how to build/install Zoltan and PyZoltan given above. You should be set now and should move to Building and Installing PySPH. Just make sure you have set the ZOLTAN environment variable so PySPH knows where to find it.
Installing the dependencies on Windows¶
While it should be possible to use mpi4py and Zoltan on Windows, we do not at this point have much experience with this. Feel free to experiment and let us know if you’d like to share your instructions. The following instructions are all without parallel support.
Using EDM¶
It is very easy to install all the dependencies with the Enthought Deployment Manager (EDM).
Download the EDM installer if you do not already have it installed. Install the appropriate installer package for your system.
Once you have installed EDM, run the following:
> edm install mayavi pyside cython matplotlib jupyter pytest mock pip
> edm shell
> pip install mako
Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python.
Using WinPython¶
Instead of Anaconda you could try WinPython 2.7.x.x. To obtain the core dependencies, download the corresponding binaries from Christoph Gohlke’s Unofficial Windows Binaries for Python Extension Packages. Mayavi is available through the binary ETS.
You can now add these binaries to your WinPython installation by going to WinPython Control Panel. The option to add packages is available under the section Install/upgrade packages.
Make sure to set your system PATH variable pointing to the location of the scripts as required. If you have installed WinPython 2.7.6 64-bit, make sure to set your system PATH variables to <path to installation folder>/python-2.7.6.amd64 and <path to installation folder>/python-2.7.6.amd64/Scripts/.
Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python.
Using Anaconda¶
Install Anaconda for your platform, make it the default and then install the required dependencies:
$ conda install cython mayavi
$ pip install mako
Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python.
Installing Visual C++ Compiler for Python¶
For all of the above Python distributions, it is highly recommended that you build PySPH with Microsoft’s Visual C++ for Python. See https://wiki.python.org/moin/WindowsCompilers for specific details for each version of Python. Note that different Python versions may have different compiler requirements.
On Python 3.6 and above you should use Microsoft’s Build Tools for Visual Studio 2017.
On Python 2.7 for example use Microsoft’s Visual C++ for Python 2.7. We
recommend that you download and install the VCForPython27.msi
available
from the link. Make sure
you install the system requirements specified on that page. For example, you
will need to install the Microsoft Visual C++ 2008 SP1 Redistributable Package
for your platform (x86 for 32 bit or x64 for 64 bit) and on Windows 8 and
above you will need to install the .NET framework 3.5. Please look at the link
given above, it should be fairly straightforward. Note that doing this will
also get OpenMP working for you.
After you do this, you will find a “Microsoft Visual C++ Compiler Package for Python” in your Start menu. Choose a suitable command prompt from this menu for your architecture and start it (we will call this the MSVC command prompt). You may make a short cut to it as you will need to use this command prompt to build PySPH and also run any of the examples.
After this is done, see section Downloading PySPH and get a copy of PySPH. Thereafter, you may follow section Building and Installing PySPH.
Warning
On 64 bit Windows, do not build PySPH with mingw64 as it does not work reliably at all and frequently crashes. YMMV with mingw32 but it is safer and just as easy to use the MS VC++ compiler.
Using a virtualenv for PySPH¶
A virtualenv allows you to create an isolated environment for PySPH and its related packages. This is useful in a variety of situations.
- Your OS does not provide a recent enough Cython version (say you are running Debian stable).
- You do not have root access to install any packages PySPH requires.
- You do not want to mess up your system files and wish to localize any installations inside directories you control.
- You wish to use other packages with conflicting requirements.
- You want PySPH and its related packages to be in an “isolated” environment.
You can either install virtualenv (or ask your system administrator to) or
just download the virtualenv.py script and use
it (run python virtualenv.py
after you download the script).
Create a virtualenv like so:
$ virtualenv --system-site-packages pysph_env
This creates a directory called pysph_env
which contains all the relevant
files for your virtualenv, this includes any new packages you wish to install
into it. You can delete this directory if you don’t want it anymore for some
reason. This virtualenv will also “inherit” packages from your system. Hence
if your system administrator already installed NumPy it may be imported from
your virtual environment and you do not need to install it. This is
very useful for large packages like Mayavi, Qt etc.
Note
If your version of virtualenv does not support the --system-site-packages option, please use the virtualenv.py script mentioned above.
Once you create a virtualenv you can activate it as follows (on a bash shell):
$ source pysph_env/bin/activate
On Windows you run a bat file as follows:
> pysph_env\Scripts\activate.bat
This sets up the PATH to point to your virtualenv’s Python. You may now run any normal Python commands and it will use your virtualenv’s Python. For example you can do the following:
$ virtualenv myenv
$ source myenv/bin/activate
(myenv) $ pip install Cython mako pytest
(myenv) $ cd pysph
(myenv) $ python setup.py install
Now PySPH will be installed into myenv. You may deactivate your virtualenv using the deactivate command:
(myenv) $ deactivate
$
On Windows, use myenv\Scripts\activate.bat and myenv\Scripts\deactivate.bat.
If for whatever reason you wish to delete myenv, just remove the entire directory:
$ rm -rf myenv
Note
With a virtualenv, one should be careful while running things like ipython or pytest, as these are sometimes also installed on the system in /usr/bin. If you suspect that you are not running the correct Python, you could simply run (on Linux/OS X):
$ python `which ipython`
to be absolutely sure.
Downloading PySPH¶
One way to install PySPH is to use pip:
$ pip install PySPH
This will install PySPH, and you should be able to import it and use the modules with your Python scripts that use PySPH. This will also provide the standard set of PySPH examples. If you want to take a look at the PySPH sources you can get it from git or download a tarball or ZIP as described below.
To get PySPH using git type the following
$ git clone https://github.com/pypr/pysph.git
If you do not have git or do not wish to bother with it, you can get a ZIP or tarball from the pysph site. You can unzip/untar this and use the sources.
In the instructions, we assume that you have the pysph sources in the directory pysph and are inside the root of this directory. For example:
$ unzip pysph-pysph-*.zip
$ cd pysph-pysph-1ce*
or if you cloned the repository:
$ git clone https://github.com/pypr/pysph.git
$ cd pysph
Once you have downloaded PySPH you should be ready to build and install it, see Building and Installing PySPH.
Building and Installing PySPH¶
Once you have the dependencies installed you can install PySPH with:
$ pip install PySPH
If you are going to be using PySPH with MPI support you will likely need to do:
$ pip install PySPH --no-build-isolation
You can install the development version using:
$ pip install https://github.com/pypr/pysph/zipball/master
If you downloaded PySPH using git or used a tarball you can do:
$ python setup.py install
You could also do:
$ python setup.py develop
This is useful if you are tracking the latest version of PySPH via git. With git you can update the sources and rebuild using:
$ git pull
$ python setup.py develop
You should be all set now and should next consider Running the tests.
Issues with the pip cache¶
Note that pip caches any packages it has built and installed earlier. So if you installed PySPH without Zoltan support, say, and then uninstalled PySPH using:
$ pip uninstall pysph
then if you try pip install pysph again (and the PySPH version has not changed), pip will simply re-use the old build it made. If you do not want this and want it to re-build PySPH (say, to use ZOLTAN), you can do the following:
$ pip install --no-cache-dir --no-build-isolation pysph
In this case, pip will disregard its default cache and freshly download and build PySPH. This is often handy.
Running the tests¶
Once you install PySPH you can run the tests using the pysph script that is installed:
$ pysph test
If you see errors while running the tests, you might want more verbose reporting which you can get with:
$ pysph test -v
This should run all the tests that do not take a long while to complete. If this fails, please contact the pysph-users mailing list or send us email.
There are a few additional test dependencies that need to be installed when running the tests. These can be installed using:
$ pip install -r requirements-test.txt
Once you run the tests, you should see the section on Running the examples.
Note
Internally, we use the pytest package to run the tests.
For more information on what you can do with the pysph script, try this:
$ pysph -h
Running the examples¶
You can verify the installation by exploring some examples. The examples are
actually installed along with the PySPH library in the pysph.examples
package. You can list and choose the examples to run by doing:
$ pysph run
This will list all the available examples and allow you to run any of them. If you wish to run a particular one, like say elliptical_drop, you may do:
$ pysph run elliptical_drop
This can also be run as:
$ pysph run pysph.examples.elliptical_drop
To see the options available, try this:
$ pysph run elliptical_drop -h
Note
Technically you can run the examples using python -m pysph.examples.elliptical_drop. The pysph run command is a lot more convenient as it allows a much shorter command.
You can view the data generated by the simulation (after the simulation is complete or during the simulation) by running the pysph view command.
To view the simulated data you may do:
$ pysph view elliptical_drop_output
If you have Mayavi installed this should show a UI that looks like:

If the viewer does not start, you may want to see Possible issues with the viewer.
There are other examples that use the transport velocity formulation:
$ pysph run cavity
This runs the driven cavity problem using the transport velocity formulation of Adami et al. The example also performs post-processing of the results, and cavity_output will contain a few PNG images with these. You may view these results using pysph view cavity_output. For example, the file streamlines.png may look like what is shown below:

If you want to use PySPH for elastic dynamics, you can try some of the examples from Gray et al., Comput. Methods Appl. Mech. Engrg. 190 (2001), 6641-6662:
$ pysph run solid_mech.rings
This runs the problem of the collision of two elastic rings. View the results like so:
$ pysph view rings_output
This should produce something that may look like the image below.

The auto-generated high-performance code for the example resides in the directory ~/.pysph/source. A note of caution, however: it is not for the faint-hearted.
Running the examples with OpenMP¶
If you have OpenMP available run any of the examples as follows:
$ pysph run elliptical_drop --openmp
This should run faster if you have multiple cores on your machine. If you wish to change the number of threads to run simultaneously, you can try the following:
$ OMP_NUM_THREADS=8 pysph run elliptical_drop --openmp
You may need to set the number of threads to about 4 times the number of physical cores on your machine to obtain the best scale-up. If you wish to time the actual scale-up of the code with and without OpenMP, you may want to disable any output (which is serial); you can do this like:
$ pysph run elliptical_drop --disable-output --openmp
Note that one may run example scripts directly with Python but this
requires access to the location of the script. For example, if a script
pysph_script.py
exists one can run it as:
$ python pysph_script.py
The pysph run
command is just a convenient way to run the
pre-installed examples that ship with PySPH.
Running the examples with OpenCL¶
If you have PyOpenCL installed and working with an appropriate device setup, then you can transparently use OpenCL as well with PySPH. This feature is very new and still fairly experimental. You may run into issues but using it is simple. You may run any of the supported examples as follows:
$ pysph run elliptical_drop --opencl
Yes, that’s it: just use the --opencl option and the code will be auto-generated and run for you. By default it uses single precision, but you can also run the code with double precision using:
Currently inlets and outlets are not supported, periodicity is slow and many optimizations still need to be made but this is rapidly improving. If you want to see an example that runs pretty fast, try the cube example:
$ pysph run cube --disable-output --np 1e6 --opencl
You may compare the execution time with that of OpenMP.
Running the examples with MPI¶
If you compiled PySPH with Zoltan and have mpi4py installed you may run any of the examples with MPI as follows (here we choose 4 processors with -np 4; change this to suit your needs):
$ mpirun -np 4 pysph run dam_break_3d
This may not give you significant speedup if the problem is too small. You can also combine OpenMP and MPI if you wish. You should take care to setup the MPI host information suitably to utilize the processors effectively.
Note
Note that again we are using pysph run here, but for any other scripts one could do mpirun -np 4 python some_script.py.
Possible issues with the viewer¶
Often users are able to install PySPH and run the examples but are unable to
run pysph view
for a variety of reasons. This section discusses how these
could be resolved.
The PySPH viewer uses Mayavi. Mayavi can be installed via pip. Mayavi depends on VTK which can also be installed via pip if your package manager does not have a suitable version.
If you are using Ubuntu 16.04 or 16.10 or a VTK version built with Qt5, it is possible that you will see a strange segmentation fault when starting the viewer. This is because Mayavi uses Qt4 and the VTK build has linked to Qt5. In these cases it may be best to use the latest VTK wheels that are now available on pypi. If you have VTK installed but you want a more recent version of Mayavi, you can always use pip to install Mayavi.
For the very specific case of Mayavi on Ubuntu 16.04 and its derivatives, you can use Ubuntu’s older VTK package like so:
$ sudo apt remove mayavi2 python-vtk6
$ sudo apt install python-vtk
$ pip install mayavi
This removes the system Mayavi and the VTK-6.x package (which is linked to Qt5) and instead installs the older python-vtk package; pip then installs Mayavi against this version of VTK. If the problem persists, remember that by default pip caches any previous installation of Mayavi, so you may need to install Mayavi like this:
$ pip --no-cache-dir install mayavi
If you are using EDM or Anaconda, things should work most of the time. However, there may be problems and in this case please report the issues to the pysph-users mailing list or send us email.
Learning the ropes¶
In the tutorials, we will introduce the PySPH framework in the context of the examples provided. Read this if you are a casual user and want to use the framework as is. If you want to add new functions and capabilities to PySPH, you should read The PySPH framework. If you are new to PySPH however, we highly recommend that you go through this document and the next tutorial (A more detailed tutorial).
Recall that PySPH is a framework for parallel SPH-like simulations in Python. The idea, therefore, is to provide a user-friendly mechanism to set up problems while leaving the internal details to the framework. All examples follow these steps:
(Figure: flowchart of the steps in a typical PySPH example.)
The tutorials address each of the steps in this flowchart for problems with increasing complexity.
The first example we consider is a “patch” test for SPH formulations for incompressible fluids in elliptical_drop_simple.py. This problem simulates the evolution of a 2D circular patch of fluid under the influence of an initial velocity field given by:
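u = -100 x,    v = 100 y

These are exactly the values assigned to u and v in the create_particles code shown later in this tutorial.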
The kinematical constraint of incompressibility causes the initially circular patch of fluid to deform into an ellipse such that the volume (area) is conserved. An expression can be derived for this deformation, which makes it an ideal test case for verifying codes.
Imports¶
Taking a look at the example (see elliptical_drop_simple.py), the first several lines are imports of various modules:
from numpy import ones_like, mgrid, sqrt
from pysph.base.utils import get_particle_array
from pysph.solver.application import Application
from pysph.sph.scheme import WCSPHScheme
Note
This is common for most examples and it is worth noting the pattern of the PySPH imports. Fundamental SPH constructs like the kernel and particle containers are imported from the base subpackage. The framework related objects like the solver and integrator are imported from the solver subpackage. Finally, we import from the sph subpackage the physics related part for this problem.
The organization of the pysph package is given below.
Organization of the pysph package¶
PySPH is organized into several sub-packages. These are:
- pysph.base: This subpackage defines the pysph.base.particle_array.ParticleArray, the various SPH kernels, the nearest neighbor particle search (NNPS) code, and the Cython code generation utilities.
- pysph.sph: Contains the various SPH equations, the Integrator related modules and associated integration steppers, and the code generation for the SPH looping. pysph.sph.wc contains the equations for the weakly compressible formulation. pysph.sph.solid_mech contains the equations for solid mechanics and pysph.sph.misc has miscellaneous equations.
- pysph.solver: Provides the pysph.solver.solver.Solver, the pysph.solver.application.Application and a convenient way to interact with the solver as it is running.
- pysph.parallel: Provides the parallel functionality.
- pysph.tools: Provides some useful tools including the pysph script CLI and also the data viewer which is based on Mayavi.
- pysph.examples: Provides many standard SPH examples. These examples are meant to be extended by users where needed. This is extremely handy to reproduce and compare SPH schemes.
Functions for loading/generating the particles¶
The code begins with a few functions related to obtaining the exact solution for the given problem which is used for comparing the computed solution.
A single new class called EllipticalDrop which derives from pysph.solver.application.Application is defined. There are several methods implemented on this class:
- initialize: lets users specify any parameters of interest relevant to the simulation.
- create_scheme: lets the user specify the pysph.sph.scheme.Scheme to use to solve the problem. Several standard schemes are already available and can be readily used.
- create_particles: this method is where one creates the particles to be simulated.
Of these, create_particles and create_scheme are mandatory, for without them an SPH simulation would be impossible. The rest (and other methods) are optional. To see a complete listing of possible methods that one can subclass, see pysph.solver.application.Application.
The create_particles
method looks like:
class EllipticalDrop(Application):
    # ...
    def create_particles(self):
        """Create the circular patch of fluid."""
        dx = self.dx
        hdx = self.hdx
        ro = self.ro
        name = 'fluid'
        x, y = mgrid[-1.05:1.05+1e-4:dx, -1.05:1.05+1e-4:dx]
        x = x.ravel()
        y = y.ravel()
        m = ones_like(x)*dx*dx
        h = ones_like(x)*hdx*dx
        rho = ones_like(x) * ro
        u = -100*x
        v = 100*y
        # remove particles outside the circle
        indices = []
        for i in range(len(x)):
            if sqrt(x[i]*x[i] + y[i]*y[i]) - 1 > 1e-10:
                indices.append(i)
        pa = get_particle_array(x=x, y=y, m=m, rho=rho, h=h, u=u, v=v,
                                name=name)
        pa.remove_particles(indices)
        print("Elliptical drop :: %d particles"
              % (pa.get_number_of_particles()))
        self.scheme.setup_properties([pa])
        return [pa]
The method is used to initialize the particles in Python. In PySPH, we use a ParticleArray object as a container for particles of a given species. You can think of a particle species as any homogeneous entity in a simulation. For example, in a two-phase air-water flow, a species could be used to represent each phase. A ParticleArray can be conveniently created interactively using NumPy arrays. For example:
>>> import numpy
>>> from pysph.base.utils import get_particle_array
>>> x, y = numpy.mgrid[0:1:0.01, 0:1:0.01]
>>> x = x.ravel(); y = y.ravel()
>>> pa = get_particle_array(x=x, y=y)
would create a ParticleArray, representing a uniform distribution of particles on a Cartesian lattice in 2D, using the helper function get_particle_array() in the base subpackage. The get_particle_array_wcsph() is a special version of this suited to weakly-compressible formulations.
Note
ParticleArrays in PySPH use flattened or one-dimensional arrays.
The ParticleArray is highly convenient, supporting methods for insertions, deletions and concatenations. In the create_particles function, we use this convenience to remove a list of particles that fall outside a circular region:
pa.remove_particles(indices)
where a list of indices is provided. One could also provide the indices in the form of a cyarray.carray.LongArray which, as the name suggests, is an array of 64 bit integers.
The particle array also supports what we call strided properties where you may associate multiple values per particle. Normally the stride length is 1. This feature is convenient if you wish to associate a matrix or vector of values per particle. You must still access the individual values as a “flattened” array but one can resize, remove, and add particles and the strided properties will be honored. For example:
>>> pa.add_property(name='A', data=2.0, default=-1.0, stride=2)
This will create a new property called 'A' with a stride length of 2.
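Continuing that session, one might inspect the strided property like so (a small sketch; pa.A is assumed to be a flat NumPy view of the property):
>>> pa.A.shape == (2*pa.get_number_of_particles(),)
True
>>> pa.A[0], pa.A[1]  # the two values stored for particle 0
(2.0, 2.0)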
Note
Any one-dimensional (NumPy) array is valid input for PySPH. You can generate this from an external program for solid modelling and load it.
Note
PySPH works with multiple ParticleArrays. This is why we actually return a list in the last line of the create_particles method above.
The create_particles
always returns a list of particle arrays even if
there is only one. The method self.scheme.setup_properties
automatically
adds any properties needed for the particular scheme being used.
Setting up the PySPH framework¶
As we move on, we encounter instantiations of the PySPH framework objects. In this example, the pysph.sph.scheme.WCSPHScheme is created in the create_scheme method. The WCSPHScheme internally creates other basic objects needed for the SPH simulation. In this case, the scheme instance is passed a list of fluid particle array names and an empty list of solid particle array names, as there are no solid boundaries. The class is also passed a variety of values relevant to the scheme and simulation. The kernel to be used is created and passed to the configure_solver method of the scheme. The pysph.sph.integrator.EPECIntegrator is used to integrate the particle properties. Various solver related parameters are also set up.
def create_scheme(self):
    s = WCSPHScheme(
        ['fluid'], [], dim=2, rho0=self.ro, c0=self.co,
        h0=self.dx*self.hdx, hdx=self.hdx, gamma=7.0, alpha=0.1, beta=0.0
    )
    dt = 5e-6
    tf = 0.0076
    s.configure_solver(dt=dt, tf=tf)
    return s
As can be seen, the solver is configured with a timestep and a final time. The scheme is responsible for:
- setting up the actual equations that describe the interactions between particles (see SPH equations),
- setting up the kernel (SPH Kernels) and integrator (Integrator related modules) to use for the simulation; in this case a default cubic spline kernel is used,
- setting up the Solver (Module solver), which marshals the entire simulation.
For a more detailed introduction to these aspects of PySPH, please read A more detailed tutorial, which provides greater detail on these. However, by simply creating the WCSPHScheme and creating the particles, one can simulate the problem.
The astute reader may notice that the EllipticalDrop example subclasses the Application. This makes it easy to pass command line arguments to the solver. It is also important for the seamless parallel execution of the same example. To appreciate the role of the Application, consider for a moment how we might write a parallel version of the same example. At some point, we would need some MPI imports and the particles would have to be created in a distributed fashion. All this (and more) is handled through the abstraction of the Application, which hides all this detail from the user.
Running the example¶
In the last two lines of the example, we instantiate the EllipticalDrop
class and run it:
if __name__ == '__main__':
    app = EllipticalDrop()
    app.run()
The Application
takes care of creating the particles, creating the
solver, handling command line arguments etc. Many parameters can be
configured via the command line, and these will override any parameters setup
in the respective create_*
methods. For example one may do the following
to find out the various options:
$ pysph run elliptical_drop_simple -h
If we run the example without any arguments it will run until the final time of 0.0076 seconds set in create_scheme. We can change this to 0.005 as follows:
$ pysph run elliptical_drop_simple --tf=0.005
When this is run, PySPH will generate Cython code from the equations and integrators that have been provided, compile that code and run the simulation. This provides a great deal of convenience for the user without sacrificing performance. The generated code is available in ~/.pysph/source. If the code/equations have not changed, the code will not be recompiled. This is all handled automatically without user intervention. By default, output files will be generated in the directory elliptical_drop_simple_output.
If we wish to utilize multiple cores we could do:
$ pysph run elliptical_drop_simple --openmp
If we wish to run the code in parallel (and have compiled PySPH with Zoltan and mpi4py) we can do:
$ mpirun -np 4 pysph run elliptical_drop_simple
This will automatically parallelize the run using 4 processors. In this example doing this will only slow it down as the number of particles is extremely small.
Visualizing and post-processing¶
You can view the data generated by the simulation (after the simulation
is complete or during the simulation) by running the pysph view
command. To view the simulated data you may do:
$ pysph view elliptical_drop_simple_output
If you have Mayavi installed this should show a UI that looks like:

For more help on the viewer, please run:
$ pysph view -h
On the user interface, the right side shows the visualized data. On top of it there are several toolbar icons. The left most is the Mayavi logo and clicking on it will present the full Mayavi user interface that can be used to configure any additional details of the visualization.
On the bottom left of the main visualization UI there is a button which has the text “Launch Python Shell”. If one clicks on this, one obtains a full Python interpreter with a few useful objects available. These are:
>>> dir()
['__builtins__', '__doc__', '__name__', 'interpolator', 'mlab',
'particle_arrays', 'scene', 'self', 'viewer']
>>> particle_arrays['fluid'].name
'fluid'
The particle_arrays
object is a dictionary of ParticleArrayHelpers
which is available in
pysph.tools.mayavi_viewer.ParticleArrayHelper
. The
interpolator
is an instance of
pysph.tools.mayavi_viewer.InterpolatorView
that is used by the
viewer. The other objects can be used to script the user interface if desired.
Note that the particle_arrays
can be indexed by array name or index.
Here is an example of scripting the viewer. Let us say we have two particle arrays, ‘boundary’ and ‘fluid’ in that order. Let us say, we wish to make the boundary translucent, then we can write the following:
b = particle_arrays['boundary']
b.plot.actor.property.opacity = 0.2
This does require some knowledge of Mayavi and scripting with it. The plot
attribute of the pysph.tools.mayavi_viewer.ParticleArrayHelper
is
a Glyph instance from Mayavi. It is useful to use the record feature
of Mayavi to learn more about how best to script the view.
The viewer will always look for a mayavi_config.py
script inside the
output directory to setup the visualization parameters. This file can be
created by overriding the pysph.solver.application.Application
object’s customize_output
method. See the dam break 3d
example to see this being used. Of course, this file can also be created
manually.
Loading output data files¶
The simulation data is dumped out either in *.hdf5
files (if one has h5py
installed) or *.npz
files otherwise. You may use the
pysph.solver.utils.load()
function to access the raw data
from pysph.solver.utils import load
data = load('elliptical_drop_100.hdf5')
# if one has only npz files the syntax is the same.
data = load('elliptical_drop_100.npz')
When opening the saved file with load
, a dictionary object is returned.
The particle arrays and other information can be obtained from this
dictionary:
particle_arrays = data['arrays']
solver_data = data['solver_data']
particle_arrays
is a dictionary of all the PySPH particle arrays.
You may obtain the PySPH particle array, fluid
, like so:
fluid = particle_arrays['fluid']
p = fluid.p
p is a numpy array containing the pressure values. All the saved particle array properties can thus be obtained and used for any post-processing task. The solver_data provides information about the iteration count, timestep and the current time.
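For instance (a small sketch; the key names t, dt and count are what the solver typically saves and may vary):
t = solver_data['t']          # current simulation time
dt = solver_data['dt']        # timestep used
count = solver_data['count']  # iteration count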
A good example that demonstrates the use of these is available in the
post_process
method of the elliptical_drop.py
example.
Interpolating properties¶
Data from the solver can also be interpolated using the
pysph.tools.interpolator.Interpolator
class. Here is the simplest
example of interpolating data from the results of a simulation onto a fixed
grid that is automatically computed from the known particle arrays:
from pysph.solver.utils import load
data = load('elliptical_drop_output/elliptical_drop_100.npz')
from pysph.tools.interpolator import Interpolator
parrays = data['arrays']
interp = Interpolator(list(parrays.values()), num_points=10000)
p = interp.interpolate('p')
p is now a numpy array of 10000 elements, shaped such that it interpolates all the data in the particle arrays loaded. interp.x and interp.y are numpy arrays of the chosen x and y coordinates corresponding to p. To visualize this we may simply do:
from matplotlib import pyplot as plt
plt.contourf(interp.x, interp.y, p)
It is easy to interpolate any other property too. If one wishes to explicitly set the domain on which the interpolation is required one may do:
xmin, xmax, ymin, ymax, zmin, zmax = 0., 1., -1., 1., 0, 1
interp.set_domain((xmin, xmax, ymin, ymax, zmin, zmax), (40, 50, 1))
p = interp.interpolate('p')
This will create a meshgrid in the specified region with the specified number of points.
One could also explicitly set the points on which one wishes to interpolate the data as:
interp.set_interpolation_points(x, y, z)
where x, y, z are numpy arrays of the coordinates of the points on which the interpolation is desired. This can also be done with the constructor as:
interp = Interpolator(list(parrays.values()), x=x, y=y, z=z)
There are some cases where one may require a higher order interpolation or a gradient approximation of the property. This can be done by passing a method for interpolation to the interpolator as:
interp = Interpolator(list(parrays.values()), num_points=10000, method='order1')
Currently, PySPH has three methods of interpolation, namely shepard, sph and order1. When order1 is set as the method, one can get the higher order interpolation or its derivative by passing an extra argument to the interpolate method specifying the component. To get the derivative in x we can do:
px = interp.interpolate('p', comp=1)
For comp=0 the interpolated property is returned, while comp=1, 2, 3 return the gradient in the x, y and z directions respectively.
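Putting this together, a short sketch that obtains the property and all three gradient components might look like:
interp = Interpolator(list(parrays.values()), num_points=10000, method='order1')
p = interp.interpolate('p', comp=0)   # the interpolated property itself
px = interp.interpolate('p', comp=1)  # gradient in x
py = interp.interpolate('p', comp=2)  # gradient in y
pz = interp.interpolate('p', comp=3)  # gradient in z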
For more details on the class and the available methods, see pysph.tools.interpolator.Interpolator.
In addition to this there are other useful pre and post-processing utilities described in Miscellaneous Tools for PySPH.
Viewing the data in an IPython notebook¶
PySPH makes it relatively easy to view the data inside an IPython notebook with minimal additional dependencies. A simple UI is provided to view the saved data using this interface. It requires jupyter, ipywidgets and ipympl. Currently, a 2D and 3D viewer are provided for the data. Here is a simple example of how one may use this in a notebook. Inside a notebook, one needs the following:
%matplotlib ipympl
from pysph.tools.ipy_viewer import Viewer2D
viewer = Viewer2D('dam_break_2d_output')
The viewer
has many useful methods:
viewer.show_info() # prints useful information about the run.
viewer.show_results() # plots any images in the output directory
viewer.show_log() # Prints the log file.
The most handy one is the one to perform interactive plots:
viewer.interactive_plot()
This shows a simple ipywidgets based UI that uses matplotlib to plot the data in the browser. The different saved snapshots can be viewed using a convenient slider. The viewer shows both the particles as well as simple vector plots. This is convenient when one wishes to share and show the data without requiring Mayavi. It does require pysph to be installed in order to be able to load the files. It is mandatory to have the first line that sets the matplotlib backend to ipympl.
There is also a 3D viewer which may be used via Viewer3D instead of the Viewer2D above. This viewer requires ipyvolume to be installed.
A slightly more complex example¶
The first example was very simple. In particular there was no post-processing of the results. Many pysph examples also include post processing code in the example. This makes it easy to reproduce results and also easily compare different schemes. A complete version of the elliptical drop example is available at elliptical_drop.py.
There are a few things that this example does a bit differently:
- It has some useful code to generate the exact solution for comparison.
- It uses a Gaussian kernel and also uses a variety of different options for the solver (see how configure_solver is called); for various other options see pysph.solver.solver.Solver.
- The EllipticalDrop class has a post_process method which optionally post-processes the results generated. This in turn uses a couple of private methods _compute_results and _make_final_plot.
- The last line of the code has a call to app.post_process(...), which actually post-processes the data.
This example is therefore a complete example and shows how one could write a useful and re-usable PySPH example.
Doing more¶
The Application has several more methods that can be used in additional contexts; for example, one may override the following additional methods:
- add_user_options: this is used to create additional user-defined command line arguments. The command line options are available in self.options and can be used in the other methods (see the sketch after this list).
- consume_user_options: this is called after the command line arguments are parsed, and can be optionally used to set up any variables that have been added by the user in add_user_options. Note that the method is called before the particles and solver etc. are created.
- create_domain: this is used when a periodic domain is needed.
- create_inlet_outlet: override this to return any inlet and outlet objects. See the pysph.sph.simple_inlet_outlet module.
There are many others; please see the Application class documentation to see these. The order of invocation of the various methods is also documented there.
There are several examples that ship with PySPH, explore these to get a better idea of what is possible.
Debugging when things go wrong¶
When you attempt to run your own simulations you may run into a variety of errors. Some errors in setting up equations and the like are easy to detect and PySPH will provide an error message that should usually be helpful. If this is a Python related error you should get a traceback and debug it as you would debug any Python program.
PySPH writes out a log file in the output directory and looking at that is sometimes useful. The log file will usually tell you the kernel, integrator, NNPS, and the exact equations and groups used for a simulation. This can often be very useful when sorting out subtle issues with the equations and groups.
Things get harder to debug when you get a segmentation fault or your code just crashes. Even though PySPH is implemented in Python you can get one of these if your timestep is too large or your equations are doing strange things (divide by zero, taking a square root of a negative number). This happens because PySPH translates your code into a lower-level language for performance. The following are the most common causes of a crash/segfault:
- The particles have “blown up”, this can happen when the accelerations are very large. This can also happen when your timestep is very large.
- There are mistakes in your equations or integrator step. Divide by zero, or some quantity was not properly initialized – for example if the particle masses were not correctly initialized and were set to zero you might get these errors. It is also possible that you have made some indexing errors in your arrays, check all your array accesses in your equations.
Let us see how we can debug these. Let us say your code is in example.py; you can do the following:
$ python example.py --pfreq 1 --detailed-output
In this case, the --pfreq 1
asks pysph to dump output at every timestep.
By default only specific properties that the user has requested are saved.
Using --detailed-output
dumps every property of every array. This includes
all accelerations as well. Viewing this data with the pysph view
command
makes it easy to see which acceleration is causing a problem.
Sometimes even this is not enough as the particles diverge or the code blows
up at the very first step of a multi-stage integrator. In this case, no output
would be generated. To debug the accelerations in this situation one may
define a method called pre_step
in your
pysph.solver.application.Application
subclass as follows:
class EllipticalDrop(Application):
    # ...
    def pre_step(self, solver):
        solver.dump_output()
What this does is ask the solver to dump the output right before each timestep is taken. At the start of the simulation the first accelerations have already been calculated, and since this output is now saved, one should be able to debug the accelerations. Again, use --detailed-output with this to look at the accelerations right at the start.
A more detailed tutorial¶
In the previous tutorial (Learning the ropes) we provided a high level overview of the PySPH framework. No details were provided on equations, integrators and solvers. This tutorial assumes that you have read the previous one.
Recall that in the previous tutorial, a circular patch of fluid with a given initial velocity field was simulated using a weakly-compressible SPH scheme. In that example, a WCSPHScheme object was created in the create_scheme method. The details of what exactly the scheme does were not discussed. This tutorial explains some of those details by solving the same problem using a lower-level approach where the actual SPH equations, the integrator, and the solver are created manually. This should help a user write their own schemes or modify an existing scheme. The full code for this example can be seen in elliptical_drop_no_scheme.py.
Imports¶
This example requires a few more imports than the previous case; the first several lines are imports of various modules:
import os
from numpy import array, ones_like, mgrid, sqrt
# PySPH base and carray imports
from pysph.base.utils import get_particle_array_wcsph
from pysph.base.kernels import Gaussian
# PySPH solver and integrator
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator import EPECIntegrator
from pysph.sph.integrator_step import WCSPHStep
# PySPH sph imports
from pysph.sph.equation import Group
from pysph.sph.basic_equations import XSPHCorrection, ContinuityEquation
from pysph.sph.wc.basic import TaitEOS, MomentumEquation
Note
This is common for all examples that do not use a scheme and it is worth
noting the pattern of the PySPH imports. Fundamental SPH constructs like
the kernel and particle containers are imported from the base
subpackage. The framework related objects like the solver and integrator
are imported from the solver
subpackage. Finally, we import from the
sph
subpackage, the physics related part for this problem.
The methods defined for creating the particles are the same as in the previous tutorial with the exception of the call to self.scheme.setup_properties([pa]). In this example, we do not create a scheme; we instead create all the required PySPH objects from the application. We do not override the create_scheme method but instead have two other methods called create_solver and create_equations which handle this.
Setting up the PySPH framework¶
As we move on, we encounter instantiations of the PySPH framework objects. These are the pysph.solver.application.Application, pysph.sph.integrator.EPECIntegrator and pysph.solver.solver.Solver objects. The create_solver method constructs a Solver instance and returns it as seen below:
def create_solver(self):
    kernel = Gaussian(dim=2)
    integrator = EPECIntegrator(fluid=WCSPHStep())
    dt = 5e-6
    tf = 0.0076
    solver = Solver(kernel=kernel, dim=2, integrator=integrator,
                    dt=dt, tf=tf, adaptive_timestep=True,
                    cfl=0.05, n_damp=50,
                    output_at_times=[0.0008, 0.0038])
    return solver
As can be seen, various options are configured for the solver, including initial damping etc.
Intuitively, in an SPH simulation, the role of the
EPECIntegrator
should be obvious. In the code, we see that we
ask for the “fluid” to be stepped using a WCSPHStep
object. Taking
a look at the create_particles
method once more, we notice that the
ParticleArray representing the circular patch was named as fluid. So
we’re essentially asking the PySPH framework to step or integrate the
properties of the ParticleArray fluid using WCSPHStep
. It is
safe to assume that the framework takes the responsibility to call this
integrator at the appropriate time during a time-step.
The Solver
is the main driver for the problem. It marshals a
simulation and takes the responsibility (through appropriate calls to the
integrator) to update the solution to the next time step. It also handles
input/output and computing global quantities (such as minimum time step) in
parallel.
Specifying the interactions¶
At this stage, we have the particles (represented by the fluid ParticleArray) and the framework to integrate the solution and marshall the simulation. What remains is to define how to actually go about updating properties within a time step. That is, for each particle we must “do something”. This is where the physics for the particular problem comes in.
For SPH, this would be the pairwise interactions between particles. In PySPH, we provide a specific way to define the sequence of interactions which is a list of Equation objects (see SPH equations). For the circular patch test, the sequence of interactions is relatively straightforward:
- Compute pressure from the Equation of State (EOS): \(p = f(\rho)\)
- Compute the rate of change of density: \(\frac{d\rho}{dt}\)
- Compute the rate of change of velocity (accelerations): \(\frac{d\boldsymbol{v}}{dt}\)
- Compute corrections for the velocity (XSPH): \(\frac{d\boldsymbol{x}}{dt}\)
Care must be taken to ensure that the EOS is evaluated for all the particles before the other equations are evaluated.
We request this in PySPH by creating a list of Equation
instances
in the create_equations
method:
def create_equations(self):
    equations = [
        Group(equations=[
            TaitEOS(dest='fluid', sources=None, rho0=self.ro,
                    c0=self.co, gamma=7.0),
        ], real=False),
        Group(equations=[
            ContinuityEquation(dest='fluid', sources=['fluid']),
            MomentumEquation(dest='fluid', sources=['fluid'],
                             alpha=self.alpha, beta=0.0, c0=self.co),
            XSPHCorrection(dest='fluid', sources=['fluid']),
        ]),
    ]
    return equations
Each Group instance is completed before the next is taken up. Each group contains a list of Equation objects. Each interaction is specified through an Equation object, which is instantiated with the general syntax:
Equation(dest, sources, **kwargs)
The dest argument specifies the target or destination ParticleArray on which this interaction is going to operate. Similarly, the sources argument specifies a list of ParticleArrays from which the contributions are sought. For some equations like the EOS, it doesn’t make sense to define a list of sources and a None suffices. The specification basically tells PySPH that for one time step of the calculation:
- Use the Tait’s EOS to update the properties of the fluid array
- Compute \(\frac{d\rho}{dt}\) for the fluid from the fluid
- Compute accelerations for the fluid from the fluid
- Compute the XSPH corrections for the fluid, using fluid as the source
Note
Notice the use of the ParticleArray name “fluid”. It is the responsibility of the user to ensure that the equation specification is done in a manner consistent with the creation of the particles.
With the list of equations, our problem is completely defined. PySPH now knows what to do with the particles within a time step. More importantly, this information is enough to generate code to carry out a complete SPH simulation. For more details on how new equations can be written please read The PySPH framework.
The example may be run the same way as the previous example:
$ pysph run elliptical_drop_no_scheme
The resulting output can be analyzed or viewed the same way as in the previous example.
In the previous example (Learning the ropes), the equations and solver are created automatically by the WCSPHScheme. If the create_scheme is overridden and returns a scheme, the create_equations and create_solver need not be implemented. For more details on the various application methods, please see pysph.solver.application.Application. Implementing other schemes can be done by either implementing the equations directly as done in this example or by implementing a new pysph.sph.scheme.Scheme.
The framework and library¶
The PySPH framework¶
This document is an introduction to the design of PySPH. This provides additional high-level details on the functionality that the PySPH framework provides. This should allow you to use PySPH effectively and extend the framework to solve problems other than those provided in the main distribution.
To elucidate some of the internal details of PySPH, we will consider a typical SPH problem and proceed to write the code that implements it. Thereafter, we will look at how this is implemented using the PySPH framework.
The dam-break problem¶
The problem that is used for the illustration is the Weakly Compressible SPH (WCSPH) formulation for free surface flows, applied to a breaking dam problem:

A column of water is initially at rest (presumably held in place by some membrane). The problem simulates a breaking dam in that the membrane is instantly removed and the column is free to fall under its own weight and the effect of gravity. This and other variants of the dam break problem can be found in the examples directory of PySPH.
Equations¶
The discrete equations for this formulation are the Tait equation of state (1), the continuity equation (2), the momentum equation with artificial viscosity (3), and the XSPH position step (4):
\[p_a = B\left[\left(\frac{\rho_a}{\rho_0}\right)^{\gamma} - 1\right] \qquad (1)\]
\[\frac{d\rho_a}{dt} = \sum_b m_b\, \boldsymbol{v}_{ab} \cdot \nabla_a W_{ab} \qquad (2)\]
\[\frac{d\boldsymbol{v}_a}{dt} = -\sum_b m_b \left(\frac{p_a}{\rho_a^2} + \frac{p_b}{\rho_b^2} + \Pi_{ab}\right)\nabla_a W_{ab} + \boldsymbol{g} \qquad (3)\]
\[\frac{d\boldsymbol{x}_a}{dt} = \boldsymbol{v}_a + \epsilon \sum_b \frac{m_b}{\bar{\rho}_{ab}}\, \boldsymbol{v}_{ba} W_{ab} \qquad (4)\]
Here \(\boldsymbol{v}_{ab} = \boldsymbol{v}_a - \boldsymbol{v}_b\), \(W_{ab}\) is the kernel, \(\Pi_{ab}\) is the artificial viscosity term and \(B = \rho_0 c_0^2/\gamma\).
Boundary conditions¶
The dam break problem involves two types of particles, namely the fluid (water column) and solid (tank). The basic boundary condition enforced on a solid wall is the no-penetration boundary condition, which for a fixed wall can be stated as
\[\boldsymbol{v} \cdot \vec{n_b} = 0\]
where \(\vec{n_b}\) is the local normal vector for the boundary. For this example, we use the dynamic boundary conditions. For this boundary condition, the boundary particles are treated as fixed fluid particles that evolve with the continuity equation ((2)) and the equation of state ((1)). In addition, they contribute to the fluid acceleration via the momentum equation ((3)). When fluid particles approach a solid wall, the density of the fluids and the solids increases via the continuity equation. With the increased density and consequently increased pressure, the boundary particles exert a repulsive force on the fluid particles, thereby enforcing the no-penetration condition.
Time integration¶
For the time integration, we use a second order predictor-corrector integrator. For the predictor stage, the following operations are carried out:
\[\boldsymbol{v}^{n+\frac{1}{2}} = \boldsymbol{v}^n + \frac{\Delta t}{2}\, a_{\boldsymbol{v}}, \qquad \boldsymbol{x}^{n+\frac{1}{2}} = \boldsymbol{x}^n + \frac{\Delta t}{2}\, a_{\boldsymbol{x}}, \qquad \rho^{n+\frac{1}{2}} = \rho^n + \frac{\Delta t}{2}\, a_{\rho}\]
Once the variables are predicted to their half time step values, the pairwise interactions are carried out to compute the accelerations. Subsequently, the corrector is used to update the particle positions:
\[\boldsymbol{v}^{n+1} = \boldsymbol{v}^n + \Delta t\, a_{\boldsymbol{v}}, \qquad \boldsymbol{x}^{n+1} = \boldsymbol{x}^n + \Delta t\, a_{\boldsymbol{x}}, \qquad \rho^{n+1} = \rho^n + \Delta t\, a_{\rho}\]
Note
The acceleration variables are prefixed with \(a\), e.g. \(a_u\). The boldface symbols in the above equations indicate vector quantities. Thus \(a_\boldsymbol{v}\) represents \(a_u,\, a_v,\, \text{and}\, a_w\) for the vector components of acceleration.
Required arrays and properties¶
We will be using two ParticleArrays (see
pysph.base.particle_array.ParticleArray
), one for the fluid and
another for the solid. Recall that for the dynamic boundary conditions, the
solid is treated like a fluid with the only difference being that the velocity
(\(a_\boldsymbol{v}\)) and position accelerations (\(a_\boldsymbol{x}
= \boldsymbol{u} + \boldsymbol{u}^{\text{XSPH}}\)) are never calculated. The
solid particles therefore remain fixed for the duration of the simulation.
To carry out the integrations for the particles, we require the following variables:
- SPH properties: x, y, z, u, v, w, h, m, rho, p, cs
- Acceleration variables: au, av, aw, ax, ay, az, arho
- Properties at the beginning of a time step: x0, y0, z0, u0, v0, w0, rho0
A non-PySPH implementation¶
We first consider the pseudo-code for the non-PySPH implementation. We assume we have been given two ParticleArrays, fluid and solid, corresponding to the dam-break problem. We also assume that a pysph.base.nnps.NNPS object nps is available and can be used for neighbor queries:
from pysph.base import nnps
fluid = get_particle_array_fluid(...)
solid = get_particle_array_solid(...)
particles = [fluid, solid]
nps = nnps.LinkedListNNPS(dim=2, particles=particles, radius_scale=2.0)
The part of the code responsible for the interactions can be defined as
class SPHCalc:
    def __init__(self, nnps, particles):
        self.nnps = nnps
        self.particles = particles

    def compute(self):
        self.eos()
        self.accelerations()

    def eos(self):
        for array in self.particles:
            num_particles = array.get_number_of_particles()
            for i in range(num_particles):
                array.p[i] = # TAIT EOS function for pressure
                array.cs[i] = # TAIT EOS function for sound speed

    def accelerations(self):
        fluid, solid = self.particles[0], self.particles[1]
        nps = self.nnps
        nbrs = UIntArray()
        # continuity equation for the fluid
        dst = fluid; dst_index = 0
        # source is fluid
        src = fluid; src_index = 0
        num_particles = dst.get_number_of_particles()
        for i in range(num_particles):
            # get nearest fluid neighbors
            nps.get_nearest_particles(src_index, dst_index, i, nbrs)
            for j in nbrs:
                # pairwise quantities
                xij = dst.x[i] - src.x[j]
                yij = dst.y[i] - src.y[j]
                ...
                # kernel interaction terms
                wij = kernel.function(xij, ...)   # kernel function
                dwij = kernel.gradient(xij, ...)  # kernel gradient
                # compute the interaction and store the contribution
                dst.arho[i] += # interaction term

        # source is solid
        src = solid; src_index = 1
        num_particles = dst.get_number_of_particles()
        for i in range(num_particles):
            # get nearest solid neighbors
            nps.get_nearest_particles(src_index, dst_index, i, nbrs)
            for j in nbrs:
                # pairwise quantities
                xij = dst.x[i] - src.x[j]
                yij = dst.y[i] - src.y[j]
                ...
                # kernel interaction terms
                wij = kernel.function(xij, ...)   # kernel function
                dwij = kernel.gradient(xij, ...)  # kernel gradient
                # compute the interaction and store the contribution
                dst.arho[i] += # interaction term

        # Destination is solid
        dst = solid; dst_index = 1
        # source is fluid
        src = fluid; src_index = 0
        num_particles = dst.get_number_of_particles()
        for i in range(num_particles):
            # get nearest fluid neighbors
            nps.get_nearest_particles(src_index, dst_index, i, nbrs)
            for j in nbrs:
                # pairwise quantities
                xij = dst.x[i] - src.x[j]
                yij = dst.y[i] - src.y[j]
                ...
                # kernel interaction terms
                wij = kernel.function(xij, ...)   # kernel function
                dwij = kernel.gradient(xij, ...)  # kernel gradient
                # compute the interaction and store the contribution
                dst.arho[i] += # interaction term
We see that the use of multiple particle arrays has forced us to write a fairly long piece of code for the accelerations. In fact, we have only shown the part of the main loop that computes \(a_\rho\) for the continuity equation. Recall that our problem states that the continuity equation should be evaluated for all particles, taking influences from all other particles into account. For two particle arrays (fluid, solid), we have four such pairings (fluid-fluid, fluid-solid, solid-fluid, solid-solid). The last one can be eliminated when we consider that the boundary has zero velocity and hence the contribution will always be trivially zero.
The apparent complexity of the SPHCalc.accelerations method notwithstanding, we notice that similar pieces of the code are being repeated. In general, we can break down the computation for a general source-destination pair like so:
# consider first destination particle array
for all dst particles:
    get_neighbors_from_source()
    for all neighbors:
        compute_pairwise_terms()
        compute_interactions_for_dst_particle()
# consider next source for this destination particle array
...
# consider the next destination particle array
Note
The SPHCalc.compute method first calls the EOS before calling the main loop to compute the accelerations. This is because the EOS (which updates the pressure) must logically be completed for all particles before the accelerations (which uses the pressure) are computed.
The predictor-corrector integrator for this problem can be defined as
class Integrator:
    def __init__(self, particles, nps, calc):
        self.particles = particles
        self.nps = nps
        self.calc = calc

    def initialize(self):
        for array in self.particles:
            array.rho0[:] = array.rho[:]
            ...
            array.w0[:] = array.w[:]

    def stage1(self, dt):
        dtb2 = 0.5 * dt
        for array in self.particles:
            array.rho[:] = array.rho0[:] + dtb2*array.arho[:]
            array.u[:] = array.u0[:] + dtb2*array.au[:]
            array.v[:] = array.v0[:] + dtb2*array.av[:]
            ...
            array.z[:] = array.z0[:] + dtb2*array.az[:]

    def stage2(self, dt):
        for array in self.particles:
            array.rho[:] = array.rho0[:] + dt*array.arho[:]
            array.u[:] = array.u0[:] + dt*array.au[:]
            array.v[:] = array.v0[:] + dt*array.av[:]
            ...
            array.z[:] = array.z0[:] + dt*array.az[:]

    def integrate(self, dt):
        self.initialize()
        self.stage1(dt)      # predictor step
        self.nps.update()    # update NNPS structure
        self.calc.compute()  # compute the accelerations
        self.stage2(dt)      # corrector step
The Integrator.integrate method is responsible for updating the solution to the next time level. Before the predictor stage, the Integrator.initialize method is called to store the values x0, y0… at the beginning of a time-step. Given the positions of the particles at the half time-step, the NNPS data structure is updated before calling the SPHCalc.compute method. Finally, the corrector step is called once we have the updated accelerations.
This hypothetical implementation can be integrated to the final time by calling the Integrator.integrate method repeatedly. In the next section, we will see how PySPH does this automatically.
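In other words, a simple driver loop (sketched here with assumed dt and tf values) would repeatedly call it:
dt, tf = 5e-6, 0.0076
t = 0.0
while t < tf:
    integrator.integrate(dt)
    t += dt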
PySPH implementation¶
Now that we have a hypothetical implementation outlined, we can proceed to describe the abstractions that PySPH introduces, enabling a highly user friendly and flexible way to define pairwise particle interactions. To see a working example, see dam_break_2d.py.
We assume that we have the same ParticleArrays (fluid and solid) and NNPS objects as before.
Specifying the equations¶
Given the particle arrays, we ask for a given set of operations to be
performed on the particles by passing a list of Equation objects (see
SPH equations) to the Solver (see
pysph.solver.solver.Solver
)
equations = [
    # Equation of state
    Group(equations=[
        TaitEOS(dest='fluid', sources=None, rho0=ro, c0=co, gamma=gamma),
        TaitEOS(dest='boundary', sources=None, rho0=ro, c0=co, gamma=gamma),
    ], real=False),
    Group(equations=[
        # Continuity equation
        ContinuityEquation(dest='fluid', sources=['fluid', 'boundary']),
        ContinuityEquation(dest='boundary', sources=['fluid']),
        # Momentum equation
        MomentumEquation(dest='fluid', sources=['fluid', 'boundary'],
                         alpha=alpha, beta=beta, gy=-9.81, c0=co),
        # Position step with XSPH
        XSPHCorrection(dest='fluid', sources=['fluid']),
    ]),
]
We see that we have used two Group objects (see pysph.sph.equation.Group), segregating two parts of the evaluation that are logically dependent. The second group, where the accelerations are computed, must be evaluated after the first group where the pressure is updated. Recall that we had to do a similar segregation for the SPHCalc.compute method in our hypothetical implementation:
class SPHCalc:
    def __init__(self, nnps, particles):
        ...

    def compute(self):
        self.eos()
        self.accelerations()
Note
PySPH will respect the order of the Equation and equation Groups as provided by the user. This flexibility also means it is quite easy to make subtle errors.
Note that in the first group, we have an additional parameter called real=False. This is only relevant for parallel simulations and for simulations with periodic boundaries. What it says is that the equations in that group should be applied to all particles (remote and local); non-local particles are not “real”. By default a Group has real=True, thus only local particles are operated on. However, we wish to apply the equation of state on all particles. Similar is the case for periodic problems, where it is sometimes necessary to set real=False in order to set the properties of the additional particles used for periodicity.
Writing the equations¶
It is important for users to be able to easily write out new SPH equations of motion. PySPH provides a very convenient way to write these equations. The PySPH framework allows the user to write these equations in pure Python. These pure Python equations are then used to generate high-performance code and then called appropriately to perform the simulations.
There are two types of particle computations in SPH simulations:
- The most common type of interaction is to change the property of one particle (the destination) using the properties of a source particle.
- A less common type of interaction is to calculate say a sum (or product or maximum or minimum) of values of a particular property. This is commonly called a “reduce” operation in the context of Map-reduce programming models.
Computations of the first kind are inherently parallel and easy to perform correctly both in serial and parallel. Computations of the second kind (reductions) can be tricky in parallel. As a result, in PySPH we distinguish between the two. This will be elaborated in more detail in the following.
In general an SPH algorithm proceeds as the following pseudo-code illustrates:
for destination in particles:
    for equation in equations:
        equation.initialize(destination)

# This is where the bulk of the computation happens.
for destination in particles:
    for source in destination.neighbors:
        for equation in equations:
            equation.loop(source, destination)

for destination in particles:
    for equation in equations:
        equation.post_loop(destination)

# Reduce any properties if needed.
total_mass = reduce_array(particles.m, 'sum')
max_u = reduce_array(particles.u, 'max')
The neighbors of a given particle are identified using a nearest neighbor algorithm. PySPH does this automatically for the user and internally uses a linked-list based algorithm to identify neighbors.
In PySPH we follow some simple conventions when writing equations. Let us look at a few equations first. In keeping with the analogy with our hypothetical implementation and the SPHCalc.accelerations method above, we consider the implementations of the PySPH pysph.sph.wc.basic.TaitEOS and pysph.sph.basic_equations.ContinuityEquation objects. The former looks like:
class TaitEOS(Equation):
    def __init__(self, dest, sources=None,
                 rho0=1000.0, c0=1.0, gamma=7.0):
        self.rho0 = rho0
        self.rho01 = 1.0/rho0
        self.c0 = c0
        self.gamma = gamma
        self.gamma1 = 0.5*(gamma - 1.0)
        self.B = rho0*c0*c0/gamma
        super(TaitEOS, self).__init__(dest, sources)

    def loop(self, d_idx, d_rho, d_p, d_cs):
        ratio = d_rho[d_idx] * self.rho01
        tmp = pow(ratio, self.gamma)
        d_p[d_idx] = self.B * (tmp - 1.0)
        d_cs[d_idx] = self.c0 * pow(ratio, self.gamma1)
Notice that it has only one loop method, and this loop is applied for all particles. Since there are no sources, there is no need for us to find the neighbors. There are a few important conventions that are to be followed when writing the equations:
- d_* indicates a destination array.
- s_* indicates a source array.
- d_idx and s_idx represent the destination and source index respectively.
- Each function can take any number of arguments as required; these are automatically supplied internally when the application runs.
- All the standard math symbols from math.h are also available.
Let us look at the ContinuityEquation as another simple example. It is defined as:
class ContinuityEquation(Equation):
    def initialize(self, d_idx, d_arho):
        d_arho[d_idx] = 0.0

    def loop(self, d_idx, d_arho, s_idx, s_m, DWIJ, VIJ):
        vijdotdwij = DWIJ[0]*VIJ[0] + DWIJ[1]*VIJ[1] + DWIJ[2]*VIJ[2]
        d_arho[d_idx] += s_m[s_idx]*vijdotdwij
Notice that the initialize method merely sets the value to zero. The loop method also accepts a few new quantities like DWIJ, VIJ etc. These are precomputed quantities and are automatically provided depending on the equations needed for a particular source/destination pair. The following precomputed quantities are available and may be passed into any equation:
- HIJ = 0.5*(d_h[d_idx] + s_h[s_idx])
- XIJ[0] = d_x[d_idx] - s_x[s_idx], XIJ[1] = d_y[d_idx] - s_y[s_idx], XIJ[2] = d_z[d_idx] - s_z[s_idx]
- R2IJ = XIJ[0]*XIJ[0] + XIJ[1]*XIJ[1] + XIJ[2]*XIJ[2]
- RIJ = sqrt(R2IJ)
- WIJ = KERNEL(XIJ, RIJ, HIJ)
- WJ = KERNEL(XIJ, RIJ, s_h[s_idx])
- WI = KERNEL(XIJ, RIJ, d_h[d_idx])
- RHOIJ = 0.5*(d_rho[d_idx] + s_rho[s_idx])
- RHOIJ1 = 1.0/RHOIJ
- DWIJ: GRADIENT(XIJ, RIJ, HIJ, DWIJ)
- DWJ: GRADIENT(XIJ, RIJ, s_h[s_idx], DWJ)
- DWI: GRADIENT(XIJ, RIJ, d_h[d_idx], DWI)
- VIJ[0] = d_u[d_idx] - s_u[s_idx], VIJ[1] = d_v[d_idx] - s_v[s_idx], VIJ[2] = d_w[d_idx] - s_w[s_idx]
- EPS = 0.01 * HIJ * HIJ
In addition, if one requires the current time or the timestep in an equation, the following may be passed into any of the methods of an equation:
- t: the current time.
- dt: the current time step.
Note
Note that all standard functions and constants in math.h
are available
for use in the equations. The value of \(\pi\) is available in
M_PI
. Please avoid using functions from numpy
as these are Python
functions and are slow. They also will not allow PySPH to be run with
OpenMP. Similarly, do not use functions or constants from sympy
and
other libraries inside the equation methods as these will significantly
slow down your code.
In addition, these constants from the math library are available:
- M_E: value of e
- M_LOG2E: value of log2(e)
- M_LOG10E: value of log10(e)
- M_LN2: value of ln(2)
- M_LN10: value of ln(10)
- M_PI: value of pi
- M_PI_2: value of pi/2
- M_PI_4: value of pi/4
- M_1_PI: value of 1/pi
- M_2_PI: value of 2/pi
- M_2_SQRTPI: value of 2/sqrt(pi)
- M_SQRT2: value of sqrt(2)
- M_SQRT1_2: value of sqrt(1/2)
In an equation, any undeclared variables are automatically declared to be
doubles in the high-performance Cython code that is generated. In addition
one may declare a temporary variable to be a matrix
or a cPoint
by
writing:
mat = declare("matrix((3,3))")
point = declare("cPoint")
When the Cython code is generated, this gets translated to:
cdef double[3][3] mat
cdef cPoint point
One can also declare any valid c-type using the same approach, for example if
one desires a long
data type, one may use ii = declare("long")
.
One may also perform any reductions on properties. Consider a trivial example
of calculating the total mass and the maximum u
velocity in the following
equation:
class FindMaxU(Equation):
    def reduce(self, dst, t, dt):
        m = serial_reduce_array(dst.m, 'sum')
        max_u = serial_reduce_array(dst.u, 'max')
        dst.total_mass[0] = parallel_reduce_array(m, 'sum')
        dst.max_u[0] = parallel_reduce_array(max_u, 'max')
where:
- dst: refers to a destination ParticleArray.
- t, dt: are the current time and timestep respectively.
- serial_reduce_array: is a special function provided that performs reductions correctly in serial. It currently supports sum, prod, max and min operations. See pysph.base.reduce_array.serial_reduce_array(). There is also a pysph.base.reduce_array.parallel_reduce_array(), which is to be used to reduce an array across processors. Using parallel_reduce_array is expensive as it is an all-to-all communication. One can reduce this cost by collecting the values into a single array and reducing that once.
We recommend that for any kind of reduction one always uses the serial_reduce_array function and the parallel_reduce_array inside a reduce method. One should not worry about parallel/serial modes in this case as this is automatically taken care of by the code generator. In serial, the parallel reduction does nothing.
With this machinery, we are able to write complex equations to solve almost any SPH problem. A user can easily define a new equation and instantiate the equation in the list of equations to be passed to the application. It is often easiest to look at the many existing equations in PySPH and learn the general patterns.
If you wish to use adaptive time stepping, see the code
pysph.sph.integrator.Integrator
. The integrator uses information
from the arrays dt_cfl
, dt_force
, and dt_visc
in each of the
particle arrays to determine the most suitable time step.
For a more focused discussion on how you should write equations, please see Writing equations.
Writing the Integrator¶
The integrator stepper code is similar to the equations in that they are all written in pure Python and Cython code is automatically generated from it. The simplest integrator is the Euler integrator which looks like this:
class EulerIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.initialize()
        self.compute_accelerations()
        self.stage1()
        self.do_post_stage(dt, 1)
Note that in this case the integrator only needs to implement one timestep
using the one_timestep
method above. The initialize
and stage
methods need to be implemented in stepper classes which perform the actual
stepping of the values. Here is the stepper for the Euler integrator:
class EulerStep(IntegratorStep):
    def initialize(self):
        pass

    def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y,
               d_z, d_rho, d_arho, dt=0.0):
        d_u[d_idx] += dt*d_au[d_idx]
        d_v[d_idx] += dt*d_av[d_idx]
        d_w[d_idx] += dt*d_aw[d_idx]
        d_x[d_idx] += dt*d_u[d_idx]
        d_y[d_idx] += dt*d_v[d_idx]
        d_z[d_idx] += dt*d_w[d_idx]
        d_rho[d_idx] += dt*d_arho[d_idx]
As can be seen, the general structure is very similar to how equations are written, in that the functions take an arbitrary number of arguments and these are automatically supplied when the methods are called. The value of dt is also provided automatically.
It is important to note that if there are additional variables to be stepped in addition to these standard ones, you must write your own stepper. Currently, only certain steppers are supported by the framework. Take a look at the Integrator related modules for more examples.
Simulating periodicity¶
PySPH provides a simplistic implementation for problems with periodicity. The
pysph.base.nnps_base.DomainManager
is used to specify this. To use
this in an application simply define a method as follows:
# ...
from pysph.base.nnps import DomainManager

class TaylorGreen(Application):
    def create_domain(self):
        return DomainManager(
            xmin=0.0, xmax=1.0, ymin=0.0, ymax=1.0,
            periodic_in_x=True, periodic_in_y=True
        )
    # ...
This is a 2D example but something similar can be done in 3D. How this works is that PySPH will automatically copy the appropriate layer of the particles from each side of the domain and create “Ghost” particles (these are not “real” particles). The properties of the particles will also be copied but this is done before any accelerations are computed. Note that this implies that the real particles should be created carefully so as to avoid two particles being placed at the same location.
For example, in the above case the domain is defined in the unit square with one corner at the origin and the other at (1, 1). If we place any particles exactly at \(x=0.0\) they will be copied over to 1.0, and if we place any particles at \(x=1.0\) they will be copied to \(x=0\). This would mean that there is one real particle at 0 and a copy from 1.0 at the same location. It is therefore important to initialize the particles starting at dx/2 and going all the way up to 1.0 - dx/2, so as to get a uniform distribution of particles without any repetitions, as sketched below. It is important to remember that the periodic particles will be “ghost” particles and so any equations that set properties like pressure should be in a group with real=False.
Writing equations¶
This document puts together all the essential information on how to write equations. We assume that you have already read the section The PySPH framework. Some information is repeated from there as well.
The PySPH equations are written in a very restricted way. The reason for this is that if you do follow the suggestions and the conventions below you will benefit from:
- a high-performance serial implementation.
- support for using your equations with OpenMP.
- support for running on a GPU.
These are the main motivations for the severe restrictions we impose when you write your equations.
Overview¶
PySPH takes the equations you write and converts them on the fly to a high-performance implementation suitable for the particular backend you request.
It is important to understand the overall structure of how the equations are
used when the high-performance code is generated. Let us look at the different
methods of a typical Equation
subclass:
class YourEquation(Equation):
    def __init__(self, dest, sources):
        # Overload this only if you need to pass additional constants.
        # Otherwise, there is no need to override __init__.

    def py_initialize(self, dst, t, dt):
        # Called once per destination array before initialize.
        # This is a pure Python function and is not translated.

    def initialize(self, d_idx, ...):
        # Called once per destination particle before the loop.

    def initialize_pair(self, d_idx, d_*, s_*):
        # Called once per destination particle for each source.
        # Can access all source arrays but has no access to
        # neighbor information.

    def loop_all(self, d_idx, ..., NBRS, N_NBRS, ...):
        # Called once before the loop; can be used for non-pairwise
        # interactions as one can pass the neighbors of particle d_idx.

    def loop(self, d_idx, s_idx, ...):
        # Loop over neighbors for all sources;
        # called once for each pair of particles!

    def post_loop(self, d_idx, ...):
        # Called after all looping is done.

    def reduce(self, dst, t, dt):
        # Called once for the destination array.
        # Any Python code can go here.

    def converged(self):
        # Return > 0 for convergence, < 0 for lack of convergence.
It is easier to understand this if we take a specific example. Let us say, we
have a case where we have two particle arrays 'fluid', 'solid'
. Let us say
the equation is used as YourEquation(dest='fluid', sources=['fluid',
'solid'])
. Now given this context, let us see what happens when this
equation is used. What happens is as follows:
- For each destination particle array ('fluid' in this case), the py_initialize method is called and is passed the destination particle array, t and dt (similar to reduce). This function is a pure Python function so you can do what you want here, including importing any Python code and running anything you want. The code is NOT transpiled into C/OpenCL/CUDA.
- For each fluid particle, the initialize method is called with the required arrays.
- For each fluid particle, the initialize_pair method is called while having access to all the fluid arrays.
- The fluid neighbors for each fluid particle are found and can be passed en-masse to the loop_all method. One can pass NBRS, which is an array of unsigned ints with indices to the neighbors in the source particles, and N_NBRS, which is the number of neighbors (an integer). This method is ideal for any non-pairwise computations or more complex computations.
- The fluid neighbors for each fluid particle are found and for each pair, the loop method is called with the required properties/values.
- For each fluid particle, the initialize_pair method is called while having access to all the solid arrays.
- The solid neighbors for each fluid particle are found and for each pair, the loop method is called with the required properties/values.
- For each fluid particle, the post_loop method is called with the required properties.
- If a reduce method exists, it is called for the destination (only once, not once per particle). It is passed the destination particle array and the time and timestep. It is transpiled when you are using Cython but is a pure Python function when you run this via OpenCL or CUDA.
The initialize, initialize_pair, loop_all, loop, post_loop
methods all may
be called in separate threads (both on CPU/GPU) depending on the
implementation of the backend.
It is possible to set a scalar value in the equation as an instance attribute,
i.e. by setting self.something = value
but remember that this is just one
value for the equation. This value must also be initialized in the
__init__
method. Also make sure that the attributes are public and not
private (i.e. do not start with an underscore). There is only one equation
instance used in the code, not one equation per thread or particle. So if you
wish to calculate a temporary quantity for each particle, you should create a
separate property for it and use that instead of assuming that the initialize
and loop functions run in serial. They do not run in serial when you use
OpenMP or OpenCL. So do not create temporary arrays inside the equation for this sort of thing. In general, if you need a constant per destination array,
add it as a constant to the particle array. Also note that you can add
properties that have strides (see Learning the ropes and look for
“stride”).
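For example, here is a minimal sketch of a correctly initialized public scalar attribute; the equation, its damping parameter and the exact force expression are hypothetical and only illustrate the pattern:

class DampedForce(Equation):
    def __init__(self, dest, sources, damping=0.1):
        # One value for the whole equation, not one per particle/thread.
        self.damping = damping
        super().__init__(dest, sources)

    def post_loop(self, d_idx, d_au, d_u):
        # Uses the scalar attribute; per-particle scratch data would
        # instead require a dedicated property.
        d_au[d_idx] -= self.damping*d_u[d_idx]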
Now, if the group containing the equation has iterate set to True, then the group will be iterated until convergence is attained for all the equations (or sub-groups) contained by it. The converged method is called once and not once per particle.
If you wish to compute a convergence measure, such as the maximum or the average error, you should do it in the reduce method.
The reduce function is called only once every time the accelerations are evaluated. As such, you may write any Python code there. The only caveat is that when using the CPU, variables must be declared a little carefully – ideally declare any variables used here as declare('object'). On the GPU, this function is not called via OpenCL and is a pure Python function.
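Putting the reduce and converged pieces together, here is a hedged sketch; the p and p_old properties and the tolerance are assumptions for illustration, and the reductions use the serial_reduce_array and parallel_reduce_array helpers described under "Writing the reduce method" below:

class IterativePressure(Equation):
    def __init__(self, dest, sources, tol=1e-4):
        self.tol = tol
        self.error = 1.0
        super().__init__(dest, sources)

    def reduce(self, dst, t, dt):
        # Maximum pressure change over the iteration, reduced first on
        # this processor and then across processors. Assumes the array
        # has 'p' and 'p_old' properties (hypothetical).
        err = declare('object')
        err = serial_reduce_array(abs(dst.p - dst.p_old), 'max')
        self.error = parallel_reduce_array(err, 'max')

    def converged(self):
        # Return > 0 for convergence, < 0 for lack of convergence.
        return 1.0 if self.error < self.tol else -1.0

Such an equation would live in a Group with iterate=True so that the group repeats until converged returns a positive value.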
Understanding Groups a bit more¶
Equations can be grouped together and it is important to understand how
exactly this works. Let us take a simple example of a Group with two equations. We illustrate the two equations with pseudo-code:
class Eq1(Equation):
def initialize(self, ...):
# ...
def loop(self, ...):
# ...
def post_loop(self, ...):
# ...
Let us say that Eq2 has a similar structure with respect to its methods, and that we have a group defined as:
Group(
equations=[
Eq1(dest='fluid', sources=['fluid', 'solid']),
Eq2(dest='fluid', sources=['fluid', 'solid']),
]
)
When this is expanded out and used inside PySPH, this is what happens in terms of pseudo-code:
# Instances of Eq1 and Eq2.
eq1 = Eq1(...)
eq2 = Eq2(...)
for d_idx in range(n_destinations):
eq1.initialize(...)
eq2.initialize(...)
# Sources from 'fluid'
for d_idx in range(n_destinations):
for s_idx in NEIGHBORS('fluid', d_idx):
eq1.loop(...)
eq2.loop(...)
# Sources from 'solid'
for d_idx in range(n_destinations):
for s_idx in NEIGHBORS('solid', d_idx):
eq1.loop(...)
eq2.loop(...)
for d_idx in range(n_destinations):
eq1.post_loop(...)
eq2.post_loop(...)
That is, all the initialization is done for each equation in sequence,
followed by the loops for each set of sources, fluid and solid in this case.
In the end, the post_loop
is called for the destinations. The equations
are therefore merged inside a group and entirely completed before the next
group is taken up. Note that the order of the equations will be exactly as
specified in the group.
When real=False is used, the non-local destination particles are also iterated over. real=True
by default, which means that only
destination particles whose tag
property is local or equal to 0 are
operated on. Otherwise, when real=False
, remote and ghost particles are
also operated on. It is important to note that this does not affect the source
particles. That is, ALL source particles influence the destinations
whether the sources are local, remote or ghost particles. The real
keyword
argument only affects the destination particles and not the sources.
Note that if you have different destinations in the same group, they are
internally split up into different sets of loops for each destination and that
these are done separately. I.e. one destination is fully processed and then
the next is considered. So if we had, for example, both fluid and solid destinations, they would be processed separately. Let's say you had this:
Group(
equations=[
Eq1(dest='fluid', sources=['fluid', 'solid']),
Eq1(dest='solid', sources=['fluid', 'solid']),
Eq2(dest='fluid', sources=['fluid', 'solid']),
Eq2(dest='solid', sources=['fluid', 'solid']),
]
)
This would internally be equivalent to the following:
[
Group(
equations=[
Eq1(dest='fluid', sources=['fluid', 'solid']),
Eq2(dest='fluid', sources=['fluid', 'solid']),
]
),
Group(
equations=[
Eq1(dest='solid', sources=['fluid', 'solid']),
Eq2(dest='solid', sources=['fluid', 'solid']),
]
)
]
Note that the fluid destinations are processed first and then the solid particles. Obviously, the first form is a lot more compact.
While it may appear that the PySPH equations and groups are fairly complex, they actually do a lot of work for you and allow you to express the interactions in a rather compact form.
When debugging it sometimes helps to look at the generated log file which will also print out the exact equations and groups that are being used.
Conventions followed¶
There are a few important conventions that are to be followed when writing the
equations. When passing arguments to the initialize, loop, post_loop
methods,
- d_* indicates a destination array.
- s_* indicates a source array.
- d_idx and s_idx represent the destination and source index respectively.
- Each function can take any number of arguments as required; these are automatically supplied internally when the application runs.
- All the standard math symbols from math.h are also available.
The following precomputed quantities are available and may be passed into any equation:

- HIJ = 0.5*(d_h[d_idx] + s_h[s_idx])
- XIJ[0] = d_x[d_idx] - s_x[s_idx], XIJ[1] = d_y[d_idx] - s_y[s_idx], XIJ[2] = d_z[d_idx] - s_z[s_idx]
- R2IJ = XIJ[0]*XIJ[0] + XIJ[1]*XIJ[1] + XIJ[2]*XIJ[2]
- RIJ = sqrt(R2IJ)
- WIJ = KERNEL(XIJ, RIJ, HIJ)
- WJ = KERNEL(XIJ, RIJ, s_h[s_idx])
- WI = KERNEL(XIJ, RIJ, d_h[d_idx])
- RHOIJ = 0.5*(d_rho[d_idx] + s_rho[s_idx])
- RHOIJ1 = 1.0/RHOIJ
- DWIJ: GRADIENT(XIJ, RIJ, HIJ, DWIJ)
- DWJ: GRADIENT(XIJ, RIJ, s_h[s_idx], DWJ)
- DWI: GRADIENT(XIJ, RIJ, d_h[d_idx], DWI)
- VIJ[0] = d_u[d_idx] - s_u[s_idx], VIJ[1] = d_v[d_idx] - s_v[s_idx], VIJ[2] = d_w[d_idx] - s_w[s_idx]
- EPS = 0.01 * HIJ * HIJ
- SPH_KERNEL: the kernel being used; one can call the kernel as SPH_KERNEL.kernel(xij, rij, h), the gradient as SPH_KERNEL.gradient(...), SPH_KERNEL.gradient_h(...) etc. The kernel is any one of the instances of the kernel classes defined in pysph.base.kernels.
In addition, if one requires the current time or the timestep in an equation, the following may be passed into any of the methods of an equation:

- t: the current time.
- dt: the current timestep.
For the loop_all and loop methods, one may also pass the following:
- NBRS: an array of unsigned ints with neighbor indices.
- N_NBRS: an integer denoting the number of neighbors for the current destination particle with index d_idx.
Note
Note that all standard functions and constants in math.h are available for use in the equations. The value of \(\pi\) is available as M_PI. Please avoid using functions from numpy as these are Python functions and are slow. They will also not allow PySPH to be run with OpenMP. Similarly, do not use functions or constants from sympy and other libraries inside the equation methods as these will significantly slow down your code.
In addition, these constants from the math library are available:

- M_E: value of e
- M_LOG2E: value of log2(e)
- M_LOG10E: value of log10(e)
- M_LN2: value of ln(2)
- M_LN10: value of ln(10)
- M_PI: value of pi
- M_PI_2: value of pi/2
- M_PI_4: value of pi/4
- M_1_PI: value of 1/pi
- M_2_PI: value of 2/pi
- M_2_SQRTPI: value of 2/sqrt(pi)
- M_SQRT2: value of sqrt(2)
- M_SQRT1_2: value of sqrt(1/2)
In an equation, any undeclared variables are automatically declared to be
doubles in the high-performance Cython code that is generated. In addition
one may declare a temporary variable to be a matrix
or a cPoint
by
writing:
vec, vec1 = declare("matrix(3)", 2)
mat = declare("matrix((3,3))")
i, j = declare('int')
When the Cython code is generated, this gets translated to:
cdef double vec[3], vec1[3]
cdef double mat[3][3]
cdef int i, j
One can also declare any valid c-type using the same approach, for example if
one desires a long
data type, one may use i = declare("long")
.
Note that the additional (optional) argument to declare specifies the number of variables. While this is ignored during transpilation, it is useful when writing functions in pure Python: the compyle.api.declare() function provides a pure Python implementation so that the code works both when compiled and when run in pure Python. For example:
i, j = declare("int", 2)
In this case, the declare function call returns two integers so that the code runs correctly in pure Python also. The second argument is optional and defaults to 1. If we defined a matrix, then this returns two NumPy arrays of the appropriate shape.
>>> declare("matrix(2)", 2)
(array([ 0., 0.]), array([ 0., 0.]))
Thus the code one writes can be used in pure Python and can also be safely transpiled into other languages.
Writing the reduce method¶
One may also perform any reductions on properties. Consider a trivial example
of calculating the total mass and the maximum u
velocity in the following
equation:
class FindMaxU(Equation):
    def reduce(self, dst, t, dt):
        # Reduce on this processor first, then across processors.
        m = serial_reduce_array(dst.m, 'sum')
        max_u = serial_reduce_array(dst.u, 'max')
        dst.total_mass[0] = parallel_reduce_array(m, 'sum')
        dst.max_u[0] = parallel_reduce_array(max_u, 'max')
where:

- dst: refers to a destination ParticleArray.
- t, dt: are the current time and timestep respectively.
- serial_reduce_array: a special function provided that performs reductions correctly in serial. It currently supports sum, prod, max and min operations. See pysph.base.reduce_array.serial_reduce_array(). There is also a pysph.base.reduce_array.parallel_reduce_array() which is to be used to reduce an array across processors. Using parallel_reduce_array is expensive as it is an all-to-all communication; one can reduce this cost by packing the values into a single array and reducing that once.
We recommend that for any kind of reductions one always use the
serial_reduce_array
function and the parallel_reduce_array
inside a
reduce
method. One should not worry about parallel/serial modes in this
case as this is automatically taken care of by the code generator. In serial,
the parallel reduction does nothing.
With this machinery, we are able to write complex equations to solve almost any SPH problem. A user can easily define a new equation and instantiate the equation in the list of equations to be passed to the application. It is often easiest to look at the many existing equations in PySPH and learn the general patterns.
Adaptive timesteps¶
There are a couple of ways to use adaptive timesteps. The first is to compute
a required timestep directly per-particle in a particle array property called
dt_adapt
. The minimum value of this array across all particle arrays is
used to set the timestep directly. This is the easiest way to set the adaptive
timestep.
If the dt_adapt
parameter is not set one may also use standard velocity,
force, and viscosity based parameters. The integrator uses information from
the arrays dt_cfl
, dt_force
, and dt_visc
in each of the particle
arrays to determine the most suitable time step. This is done using the
following approach. The minimum smoothing parameter h
is found as
hmin
. Let the CFL number be given as cfl
. For the velocity criterion,
the maximum value of dt_cfl
is found and then a suitable timestep is found
as:
dt_min_vel = hmin/max(dt_cfl)
For the force based criterion we use the following:
dt_min_force = sqrt(hmin/sqrt(max(dt_force)))
for the viscosity we have:
dt_min_visc = hmin/max(dt_visc)
Then the correct timestep is found as:
dt = cfl*min(dt_min_vel, dt_min_force, dt_min_visc)
The cfl
is set to 0.3 by default. One may pass --cfl
to the
application to change the CFL. Note that when the dt_adapt
property is
used the CFL has no effect as we assume that the user will compute a suitable
value based on their requirements.
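For example, here is a hedged sketch of setting the dt_adapt property per particle; the equation name and the CFL-style criterion are illustrative only, and dt_adapt must have been added as a property to the particle array:

class AdaptiveDt(Equation):
    def __init__(self, dest, sources, cfl=0.3):
        self.cfl = cfl
        super().__init__(dest, sources)

    def post_loop(self, d_idx, d_dt_adapt, d_h, d_u, d_v, d_w):
        # Per-particle timestep; the solver takes the global minimum
        # of dt_adapt over all particle arrays.
        vmag = sqrt(d_u[d_idx]*d_u[d_idx] + d_v[d_idx]*d_v[d_idx]
                    + d_w[d_idx]*d_w[d_idx])
        d_dt_adapt[d_idx] = self.cfl*d_h[d_idx]/(vmag + 1.0e-12)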
The pysph.sph.integrator.Integrator
class code may be instructive
to look at if you are wondering about any particular details.
Illustration of the loop_all method¶
The loop_all method is powerful. Here we show how we can use it to perform ourselves what the loop method usually does:
class LoopAllEquation(Equation):
def initialize(self, d_idx, d_rho):
d_rho[d_idx] = 0.0
def loop_all(self, d_idx, d_x, d_y, d_z, d_rho, d_h,
s_m, s_x, s_y, s_z, s_h,
SPH_KERNEL, NBRS, N_NBRS):
i = declare('int')
s_idx = declare('long')
xij = declare('matrix(3)')
rij = 0.0
sum = 0.0
for i in range(N_NBRS):
s_idx = NBRS[i]
xij[0] = d_x[d_idx] - s_x[s_idx]
xij[1] = d_y[d_idx] - s_y[s_idx]
xij[2] = d_z[d_idx] - s_z[s_idx]
rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] + xij[2]*xij[2])
sum += s_m[s_idx]*SPH_KERNEL.kernel(xij, rij, 0.5*(s_h[s_idx] + d_h[d_idx]))
d_rho[d_idx] += sum
This seems a bit complex but let us look at what is being done. initialize
is called once per particle and each of their densities is set to zero. Then
when loop_all
is called it is called once per destination particle (unlike
loop
which is called pairwise for each destination and source particle).
The loop_all
is passed arrays as is typical of most equations but is also
passed the SPH_KERNEL
itself, the list of neighbors, and the number of
neighbors.
The code first declares the variables i and s_idx as an int and a long respectively, and then xij as a 3-element array. These declarations are important for performance in
as a 3-element array. These are important for performance in
the generated code. The code then loops over all neighbors and computes the
summation density. Notice how the kernel is computed using
SPH_KERNEL.kernel(...)
. Notice also how the source index, s_idx
is found
from the neighbors.
The loop_all code above does exactly what the following loop method does in a single line:
def loop(self, d_idx, d_rho, s_m, s_idx, WIJ):
d_rho[d_idx] += s_m[s_idx]*WIJ
However, loop
is only called pairwise and there are times when we want to
do more with the neighbors. For example if we wish to setup a matrix and solve
it per particle, we could do it in loop_all
efficiently. This is also very
useful for non-pairwise interactions which are common in other particle
methods like molecular dynamics.
Calling user-defined functions from equations¶
Sometimes we may want to call a user-defined function from the equations. Any pure Python function defined using the same conventions as listed above (with suitable type hints) can be called from the equations. Here is a simple example from one of the tests in PySPH.
def helper(x=1.0):
return x*1.5
class SillyEquation(Equation):
def initialize(self, d_idx, d_au, d_m):
d_au[d_idx] += helper(d_m[d_idx])
def _get_helpers_(self):
return [helper]
Notice that initialize
is calling the helper
function defined above.
The helper function has a default argument to indicate to our code generation
that x is a floating point number. We could have also set the default argument
to a list and this would then be passed an array of values. The
_get_helpers_
method returns a list of functions and these functions are
automatically transpiled into high-performance C or OpenCL/CUDA code and can
be called from your equations.
Here is a more complex helper function.
def trace(x=[1.0, 1.0], nx=1):
i = declare('int')
result = 0.0
for i in range(nx):
result += x[i]
return result
class SillyEquation(Equation):
def loop(self, d_idx, d_au, d_m, XIJ):
d_au[d_idx] += trace(XIJ, 3)
def _get_helpers_(self):
return [trace]
The trace function effectively is converted into a function with signature
double trace(double* x, int nx)
and thus can be called with any
one-dimensional array.
Calling arbitrary Python functions from a Group¶
Sometimes, you may need to implement something that is hard to write (at least
initially) with the constraints that PySPH places. For example if you need to
implement an algorithm that requires more complex data structures and you want
to do it easily in Python. There are ways to call arbitrary Python code from
the application already but sometimes you need to do this during every
acceleration evaluation. To support this the Group
class supports
two additional keyword arguments called pre and post. These can be any Python callable that takes no arguments. Any callable passed as pre will be called before any equation-related code is executed, and post will be executed after the entire group is finished. If the group is iterated, these callables are invoked on each iteration.
These functions are pure Python functions, so you may do anything in them. They are not called within an OpenMP context, and if you are using the OpenCL or CUDA backends this is again simply a Python function call that has nothing to do with the particular backend. However, since it is arbitrary Python, you can implement the code using any approach you choose. This should be flexible enough to customize PySPH greatly.
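A quick hedged sketch of the idea; the callables and the equation used here are placeholders:

def save_state():
    # Arbitrary Python, runs before any equation in the group.
    print('Starting group evaluation')

def report_done():
    # Runs after the entire group is finished.
    print('Group finished')

Group(
    equations=[
        MyEquation(dest='fluid', sources=['fluid'])
    ],
    pre=save_state, post=report_done
)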
Conditional execution of groups¶
A Group
takes a keyword argument called condition
which can be
any Python callable (function/method). This callable is passed the values of
t, dt
. If the function returns True
then the group is executed,
otherwise it is not. This is useful in situations where, say, you want to run a specific set of equations only every 20 iterations, or move an object only at a specified time. Here is a quick pseudo-example; we define the Group as below:
def check_time(t, dt):
if int(t/dt) % 20 == 0:
return True
else:
return False
equations = [
Group( ... ),
Group(
equations=[
ShepardDensityFilter(dest='fluid', sources=['fluid'])
],
condition=check_time
)
]
In the above pseudo-code the idea is that only when the check_time
returns
True
will the density filter group be executed. You can also pass a method
instead of a function. The only condition is that it should accept the two
arguments t, dt
.
Controlling the looping over destination particles¶
Sometimes (this is pretty rare) you may want to only iterate over a subset of the particles. Usually, the iterations over the destinations are performed roughly like the following pure Python code:
for d_idx in range(n_destinations):
# ...
The Group
class also takes a start_idx
argument which defaults
to 0 and a stop_idx
which defaults to the total number of particles. One
can pass either a number or a string with a property/constant whose first item
will be used as the number. For example you could have this:
Group(
equations=[
SimpleEquation(dest='fluid', sources=['fluid'])
],
start_idx=10, stop_idx=20
)
This would iterate from the index number 10 to the index number 19, i.e.
similar to using a range(10, 20)
. You could also do:
Group(
equations=[
SimpleEquation(dest='fluid', sources=['fluid'])
],
stop_idx='n_body'
)
Where 'n_body'
is a constant available in the destination particle array.
Another instance where this could be useful is when you want to run an
equation only on the ghost particles, i.e. when real
is False. In this
case, let us say there are 5000 real particles, we could simply pass
start_idx=5000, real=False
and it will only iterate over the non-real
particles.
Writing integrators¶
Similar rules apply when writing an IntegratorStep
. One can create
a multi-stage integrator as follows:
class MyStepper(IntegratorStep):
def initialize(self, d_idx, d_x):
# ...
def py_stage1(self, dst, t, dt):
# ...
def stage1(self, d_idx, d_x, d_ax):
# ...
def py_stage2(self, dst, t, dt):
# ...
def stage2(self, d_idx, d_x, d_ax):
# ...
In this case, the initialize, stage1, stage2 methods are transpiled and called, but py_stage1, py_stage2 are pure Python functions called before the respective stage functions. Defining the py_stage1 or py_stage2 methods is optional. If you have defined them,
they will be called automatically. They are passed the destination particle
array, the current time, and current timestep.
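For instance, a minimal single-stage Euler-style stepper sketch using the standard property names; this is a simplification for illustration under those assumptions, not a recommended integrator:

class EulerStepSketch(IntegratorStep):
    def stage1(self, d_idx, d_x, d_u, d_au, dt):
        # Advance the velocity, then the position, by one step.
        d_u[d_idx] += dt*d_au[d_idx]
        d_x[d_idx] += dt*d_u[d_idx]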
Different equations for different stages¶
By default, when one creates equations the implicit assumption is that the
same right-hand-side is evaluated at each stage of the integrator. However,
some schemes require that one solve different equations for different
integrator stages. PySPH does support this but to do this when one creates
equations in the application, one should return an instance of
pysph.sph.equation.MultiStageEquations
. For example:
def create_equations(self):
# ...
eqs = [
[Eq1(dest='fluid', sources=['fluid'])],
[Eq2(dest='fluid', sources=['fluid'])]
]
from pysph.sph.equation import MultiStageEquations
return MultiStageEquations(eqs)
In the above, note that each element of eqs is a list; it could also have been a Group. Each item of the given equations is treated as a separate collection of equations to be used. The use of the
pysph.sph.equation.MultiStageEquations
tells PySPH that multiple
equation sets are being used.
Now that we have this, how do we call the right accelerations at the right
times? We do this by sub-classing the
pysph.sph.integrator.Integrator
. We show a simple example from our
test suite to illustrate this:
from pysph.sph.integrator import Integrator
class MyIntegrator(Integrator):
def one_timestep(self, t, dt):
self.compute_accelerations(0)
# Equivalent to self.compute_accelerations()
self.stage1()
self.do_post_stage(dt, 1)
self.compute_accelerations(1, update_nnps=False)
self.stage2()
self.update_domain()
self.do_post_stage(dt, 2)
Note that the compute_accelerations
method takes two arguments, the
index
(which defaults to zero) and update_nnps
which defaults to
True
. A simple integrator with a single RHS would simply call
self.compute_accelerations()
. However, in the above, the first set of
equations is called first, and then for the second stage the second set of
equations is evaluated but without updating the NNPS (handy if the particles
do not move in stage1). Note the call to self.update_domain() after the second stage: this sets up any ghost particles for periodicity when particles
have been moved, it also updates the neighbor finder to use an appropriate
neighbor length based on the current smoothing length. If you do not need to
do this for your particular integrator you may choose not to add this. In the
above case, the domain is not updated after the first stage as the particles
have not moved.
The above illustrates how one can create more complex integrators that employ different accelerations in each stage.
Examples to study¶
The following equations provide good examples for how one could use/write the
reduce
method:
- pysph.sph.gas_dynamics.basic.SummationDensityADKE: relatively simple.
- pysph.sph.rigid_body.RigidBodyMoments: this is pretty complex.
- pysph.sph.iisph.PressureSolve: relatively straight-forward.
The equations that demonstrate the converged method are:

- pysph.sph.gas_dynamics.basic.SummationDensity: relatively simple.
- pysph.sph.iisph.PressureSolve.
Some equations that demonstrate using matrices and solving systems of equations are:
Writing an inlet outlet manager¶
This section discusses writing your own Inlet Outlet Manager (IOM). If you want to use the existing IOM subclass present in PySPH see Flow past a circular cylinder using open boundary conditions. The IOM manages all the inputs required to simulate the open boundaries in PySPH. It has the following functions:
- Creation of ghost particles
- Creation of inlet/outlet steppers
- Creation of inlet/outlet equations
- Creation of inlet/outlet particle updaters
Overview¶
A brief overview of an InletOutletManager subclass:
class MyIOM(InletOutletManager):
def __init__(self, fluid_arrays, inletinfo, outletinfo,
extraeqns=None):
# Create the object to manage inlet outlet boundary conditions.
# Most of the variables are evaluated after the scheme and particles
# are created, i.e. after application.consume_user_options runs.
def create_ghost(self, pa_arr, inlet=True):
# Creates ghosts for the given inlet/outlet particles
# return ghost_pa (the ghost particle array for the pa_arr)
def update_dx(self, dx):
# Update the discretization length
def add_io_properties(self, pa, scheme=None):
# Add properties to be used in inlet/outlet equations
# return the list of properties
def get_io_names(self, ghost=False):
# Return all the names of inlets and outlets
def get_stepper(self, scheme, integrator, **kw):
# Returns the steppers for inlet/outlet
def setup_iom(self, dim, kernel):
# User data in application.consume_user_options are passed
def get_equations(self, scheme, **kw):
# Returns the equations for inlet/outlet
def get_equations_post_compute_acceleration(self):
# Returns the equations for inlet/outlet used post acceleration
# computation
def get_inlet_outlet(self, particle_array):
# Returns list of `Inlet` and `Outlet` instances which
# updates inlet particles to fluid and fluid particles to outlet.
# This also creates new inlet particle and consume outlet particles.
- The IOM gets initialized in the configure_scheme method of the Application instance.
- The IOM is initialized using the list of fluid particle arrays fluid_arrays, and inlet_info and outlet_info, instances of InletInfo and OutletInfo respectively. These info classes contain information about the inlet/outlet such as direction, size, etc.
To explain the inlet outlet manager in detail, let us consider the mirror boundary implemented using the IOM class in simple_inlet_outlet.py for the EDACScheme:
class EDACScheme(Scheme):
def __init__(self, fluids, solids, dim, c0, nu, rho0, pb=0.0,
gx=0.0, gy=0.0, gz=0.0, tdamp=0.0, eps=0.0, h=0.0,
edac_alpha=0.5, alpha=0.0, bql=True, clamp_p=False,
inlet_outlet_manager=None, inviscid_solids=None):
...
self.inlet_outlet_manager = inlet_outlet_manager
...
def configure_solver(self, kernel=None, integrator_cls=None,
extra_steppers=None, **kw):
...
iom = self.inlet_outlet_manager
if iom is not None:
iom_stepper = iom.get_stepper(self, cls, self.use_tvf)
for name in iom_stepper:
steppers[name] = iom_stepper[name]
...
if iom is not None:
iom.setup_iom(dim=self.dim, kernel=kernel)
def setup_properties(self, particles, clean=True):
...
iom = self.inlet_outlet_manager
fluids_with_io = self.fluids
if iom is not None:
io_particles = iom.get_io_names(ghost=True)
fluids_with_io = self.fluids + io_particles
for fluid in fluids_with_io:
...
if iom is not None:
iom.add_io_properties(pa, self)
...
def create_equations(self):
...
return self._get_internal_flow_equations()
def _get_internal_flow_equations(self):
...
iom = self.inlet_outlet_manager
fluids_with_io = self.fluids
if iom is not None:
fluids_with_io = self.fluids + iom.get_io_names()
equations = []
if iom is not None:
io_eqns = iom.get_equations(self, self.use_tvf)
for grp in io_eqns:
equations.append(grp)
...
if iom is not None:
io_eqns = iom.get_equations_post_compute_acceleration()
for grp in io_eqns:
equations.append(grp)
return equations
- The additional properties can be added in the add_io_properties function, which is called in the setup_properties method of a Scheme instance.
- The get_stepper function passes the appropriate steppers for the inlet and outlet in the configure_solver method of the Scheme instance.
- The get_equations and get_equations_post_compute_acceleration functions provide the additional equations used to interpolate properties from the fluid particle arrays. These are called in the create_equations method of the Scheme instance.
- Any additional data required from the Application or Scheme instance can be passed to the IOM using the setup_iom method.
Additionally, in the Application
instance:
- The get_inlet_outlet method provides the instances of Inlet and Outlet, which update inlet particles to fluid and fluid particles to outlet when they cross the interface. This method is called in the create_inlet_outlet method of the Application instance.
- In a mirror type inlet-outlet, a ghost layer of particles is required which is a mere reflection about the inlet/outlet-fluid interface. It is created in create_particles using create_ghost.
The IOM makes the above steps easy to manage. An example showing the usage of the IOM is flow_past_cylinder_2d.py.
Note
The IOM is a convenience to manage the various attributes of an inlet/outlet implementation in PySPH, but all this is not automatic. The user has to take care of appropriately invoking the IOM methods in the Application and Scheme instances.
Using StarCluster with PySPH¶
StarCluster is an open source cluster-computing toolkit for Amazon’s Elastic Compute Cloud (EC2). StarCluster has been designed to simplify the process of building, configuring, and managing clusters of virtual machines on Amazon’s EC2 cloud.
Using StarCluster along with PySPH’s MPI support, you can run PySPH code on multiple instances in parallel and complete simulations faster.
Configuring StarCluster¶
Creating Configuration File¶
After StarCluster has been installed, the next step is to update your StarCluster configuration:
$ starcluster help
StarCluster - (http://star.mit.edu/cluster)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu
cli.py:87 - ERROR - config file /home/user/.starcluster/config does not exist
Options:
--------
[1] Show the StarCluster config template
[2] Write config template to /home/user/.starcluster/config
[q] Quit
Please enter your selection:
Select the second option by typing 2 and press enter. This will give you a template to use to create a configuration file containing your AWS credentials, cluster settings, etc. The next step is to customize this file using your favorite text-editor
$ emacs ~/.starcluster/config
Updating AWS Credentials¶
This file is commented with example “cluster templates”. A cluster template defines a set of configuration settings used to start a new cluster. The config template provides a smallcluster template that is ready to go out-of-the-box. However, first, you must fill in your AWS credentials and keypair info
[aws info]
aws_access_key_id = # your aws access key id here
aws_secret_access_key = # your secret aws access key here
aws_user_id = # your 12-digit aws user id here
To find your AWS User ID, see Finding your Account Canonical User ID
You can get your root user credentials from the Security Credentials page on AWS
Management Console. However, root credentials allow for full access to all
resources on your account and it is recommended that you create separate IAM
(Identity and Access Management) user credentials for managing access to your
EC2 resources. To create IAM user credentials, see Creating IAM Users
(Console). For StarCluster, create an IAM user with the EC2 Full Access
permission.
If you don’t already have a keypair, you can generate one using StarCluster by running:
$ starcluster createkey mykey -o ~/.ssh/mykey.rsa
This will create a keypair called mykey on Amazon EC2 and save the private key to ~/.ssh/mykey.rsa. Once you have a key the next step is to fill in your keypair info in the StarCluster config file
[key mykey]
key_location = ~/.ssh/mykey.rsa
Also, update the following information for the smallcluster configuration:
[cluster smallcluster]
..
KEYNAME = mykey
..
Now that the basic configuration for StarCluster is complete, you can directly launch instances using StarCluster. However, note that EC2 charges are not pro rata and you will be charged for an entire hour even if you run an instance for a few minutes. Before attempting to deploy an instance/cluster you can modify the following information in your cluster configuration:
[cluster smallcluster]
..
NODE_INSTANCE_TYPE=t2.micro
NODE_IMAGE_ID=ami-6b211202
..
Now you can launch an EC2 instance using:
$ starcluster start smallcluster
You can SSH into the master node by running:
$ starcluster sshmaster smallcluster
You can transfer files to the nodes using the get
and put
commands as:
$ starcluster put /path/to/local/file/or/dir /remote/path/
$ starcluster get /path/to/remote/file/or/dir /local/path/
Finally, you can terminate the instance by running:
$ starcluster terminate smallcluster
Setting up PySPH for StarCluster¶
Most of the public AMIs currently distributed for StarCluster are outdated and
have reached their end of life. To ensure a hassle-free experience while
further extending the AMI and installing packages, you can use the 64 bit
Ubuntu 16.04 AMI with AMI ID ami-01fdc27a
which has most StarCluster
dependencies and PySPH dependencies installed.
Base AMI for PySPH [Optional]¶
The ami.sh
file which can be found in the starcluster
directory in the
PySPH repository automatically launches a vanilla 64-bit Ubuntu 16.04 instance,
installs any necessary StarCluster and PySPH dependencies and saves an AMI with
this configuration on your AWS account
$ ./ami.sh
The AMI ID of the generated image is stored in AMI_ID
. You can also see a
list of the AMIs currently in your AWS account by running
$ starcluster listimages
Cluster configuration for PySPH¶
Modify your StarCluster configuration file with the following
information. Launching a cluster with the following configuration will start 2
t2.micro instances, install the latest version of PySPH in each and keep track
of the nodes loaded in /home/pysph/PYSPH_HOSTS
:
[cluster pysphcluster]
KEYNAME = mykey
CLUSTER_SIZE = 2 # Number of nodes in cluster
CLUSTER_USER = pysph
CLUSTER_SHELL = bash
NODE_IMAGE_ID = ami-01fdc27a # Or AMI ID for base AMI generated previously
NODE_INSTANCE_TYPE = t2.micro # EC2 Instance type
PLUGINS = pysph_install
[plugin pysph_install]
setup_class = sc_pysph.PySPHInstaller
Also, copy sc_pysph.py
from the starcluster
directory to
~/.starcluster/plugins/
Running PySPH scripts on a cluster¶
You can start the cluster configured previously by running
$ starcluster start -c pysphcluster cluster
Assuming your PySPH file cube.py
is in the local home directory, you can
first transfer this file to the cluster:
$ starcluster put -u pysph cluster ~/cube.py /home/pysph/cube.py
Then run the PySPH code as:
$ starcluster sshmaster -u pysph cluster "mpirun -n 2 --hostfile ~/PYSPH_HOSTS python ~/cube.py"
Finally, you can get the output generated by PySPH back by running:
$ starcluster get -u pysph cluster /home/pysph/cube_output .
Using the PySPH library¶
In this document, we describe the fundamental data structures for working with particles in PySPH. Take a look at A more detailed tutorial for a tutorial introduction to some of the examples. For the experienced user, take a look at The PySPH framework for some of the internal code-generation details and if you want to extend PySPH for your application.
Working With Particles¶
As an object oriented framework for particle methods, PySPH provides convenient data structures to store and manipulate collections of particles. These can be constructed from within Python and are fully compatible with NumPy arrays. We begin with a brief description for the basic data structures for arrays.
C-arrays¶
The cyarray.carray.BaseArray
class provides a typed array data
structure called CArray. These are used throughout PySPH and are
fundamentally very similar to NumPy arrays. The following named types are
supported:
- cyarray.carray.UIntArray (32 bit unsigned integers)
- cyarray.carray.IntArray (32 bit signed integers)
- cyarray.carray.LongArray (64 bit signed integers)
- cyarray.carray.DoubleArray (64 bit floating point numbers)
Some simple commands to work with BaseArrays from the interactive shell are given below
>>> import numpy
>>> from cyarray.carray import DoubleArray
>>> array = DoubleArray(10) # array of doubles of length 10
>>> array.set_data( numpy.arange(10) ) # set the data from a NumPy array
>>> array.get(3) # get the value at a given index
>>> array.set(5, -1.0) # set the value at an index to a value
>>> array[3] # standard indexing
>>> array[5] = -1.0 # standard indexing
ParticleArray¶
In PySPH, a collection of BaseArrays make up what is called a
ParticleArray
. This is the main data structure that is used to
represent particles and can be created from NumPy arrays like so:
>>> import numpy
>>> from pysph.base.utils import get_particle_array
>>> x, y = numpy.mgrid[0:1:0.1, 0:1:0.1] # create some data
>>> x = x.ravel(); y = y.ravel() # flatten the arrays
>>> pa = get_particle_array(name='array', x=x, y=y) # create the particle array
In the above, the helper function
pysph.base.utils.get_particle_array()
will instantiate and return a
ParticleArray
with properties x and y set from given NumPy
arrays. In general, a ParticleArray
can be instantiated with an
arbitrary number of properties. Each property is stored internally as a
cyarray.carray.BaseArray
of the appropriate type.
By default, every ParticleArray
returned using the helper
function will have the following properties:
- x, y, z : Position coordinates (doubles)
- u, v, w : Velocity (doubles)
- h, m, rho : Smoothing length, mass and density (doubles)
- au, av, aw: Accelerations (doubles)
- p : Pressure (doubles)
- gid : Unique global index (unsigned int)
- pid : Processor id (int)
- tag : Tag (int)
The role of the particle properties like positions, velocities and other variables should be clear. These define either the kinematic or dynamic properties associated with SPH particles in a simulation.
In addition to scalar properties, particle arrays also support “strided” properties i.e. associating multiple elements per particle. For example:
>>> pa.add_property('A', data=2.0, stride=2)
>>> pa.A
This will add a new property with name 'A'
but which has 2 elements
associated with each particle. When one adds/remove particles this is taken
into account automatically. When accessing such a particle, one has to be
careful though as the underlying array is still stored as a one-dimensional
array.
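For example, assuming the two-stride property 'A' added above, one convenient way to view it with one row per particle is:

>>> pa.A.reshape(-1, 2)  # one row per particle, two components each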
PySPH introduces a global identifier for a particle which is required to be unique for that particle. This is represented with the property gid which is of type unsigned int. This property is used in the parallel load balancing algorithm with Zoltan.
The property pid for a particle is an integer that is used to identify the processor to which the particle is currently assigned.
The property tag is an integer that is used for any other identification. For example, we might want to mark all boundary particles with the tag 100. Using this property, we can delete all such particles as
>>> pa.remove_tagged_particles(tag=100)
This gives us a very flexible way to work with particles. Another way
of deleting/extracting particles is by providing the indices (as a
list, NumPy array or a LongArray
) of the particles to
be removed:
>>> indices = [1,3,5,7]
>>> pa.remove_particles( indices )
>>> extracted = pa.extract_particles(indices, props=['rho', 'x', 'y'])
A ParticleArray
can be concatenated with another array to
result in a larger array:
>>> pa.append_parray(another_array)
To set a given list of properties to zero:
>>> props = ['au', 'av', 'aw']
>>> pa.set_to_zero(props)
Properties in a particle array are automatically sized depending on the number
of particles. There are times when fixed size properties are required. For
example if the total mass or total force on a particle array needs to be
calculated, a fixed size constant can be added. This can be done by adding a
constant
to the array as illustrated below:
>>> pa.add_constant('total_mass', 0.0)
>>> pa.add_constant('total_force', [0.0, 0.0, 0.0])
>>> print(pa.total_mass, pa.total_force)
In the above, the total_mass
is a fixed DoubleArray
of length 1 and
the total_force
is a fixed DoubleArray
of length 3. These constants
will never be resized as one adds or removes particles to/from the particle
array. The constants may be used inside of SPH equations just like any other
property.
The constants can also be set in the constructor of the ParticleArray by passing a dictionary of constants as the constants keyword argument. For example:
>>> pa = ParticleArray(
... name='test', x=x,
... constants=dict(total_mass=0.0, total_force=[0.0, 0.0, 0.0])
... )
Take a look at ParticleArray
reference documentation for
some of the other methods and their uses.
Nearest Neighbour Particle Searching (NNPS)¶
To carry out pairwise interactions for SPH, we need to find the nearest
neighbours for a given particle within a specified interaction radius. The
NNPS
object is responsible for handling these nearest neighbour
queries for a list of particle arrays:
>>> from pysph.base import nnps
>>> pa1 = get_particle_array(...) # create one particle array
>>> pa2 = get_particle_array(...) # create another particle array
>>> particles = [pa1, pa2]
>>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, radius_scale=3)
The above will create an NNPS
object that uses the classical
linked-list algorithm for nearest neighbour searches. The radius of
interaction is determined by the argument radius_scale. The book-keeping
cells have a length of \(\text{radius_scale} \times h_{\text{max}}\),
where \(h_{\text{max}}\) is the maximum smoothing length of all
particles assigned to the local processor.
Note that the NNPS
classes also support caching the neighbors
computed. This is useful if one needs to reuse the same set of
neighbors. To enable this, simply pass cache=True
to the
constructor:
>>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, cache=True)
Since we allow a list of particle arrays, we need to distinguish between source and destination particle arrays in the neighbor queries.
Note
A destination particle is a particle belonging to that species for which the neighbors are sought.
A source particle is a particle belonging to that species which contributes to a given destination particle.
With these definitions, we can query for nearest neighbors like so:
>>> nbrs = UIntArray()
>>> nps.get_nearest_particles(src_index, dst_index, d_idx, nbrs)
where src_index, dst_index and d_idx are integers. This will
return, for the d_idx particle of the dst_index particle array
(species), nearest neighbors from the src_index particle array
(species). Passing the src_index and dst_index every time is
repetitive so an alternative API is to call set_context
as done
below:
>>> nps.set_context(src_index=0, dst_index=0)
If the NNPS
instance is configured to use caching, then it will also
pre-compute the neighbors very efficiently. Once the context is set one
can get the neighbors as:
>>> nps.get_nearest_neighbors(d_idx, nbrs)
Where d_idx and nbrs are as discussed above.
If we want to re-compute the data structure for a new distribution of
particles, we can call the NNPS.update()
method:
>>> nps.update()
Periodic domains¶
The constructor for the NNPS
accepts an optional argument
(DomainManager
) that is used to delimit the maximum
spatial extent of the simulation domain. Additionally, this argument
is also used to indicate the extents for a periodic domain. We
construct a DomainManager
object like so
>>> from pysph.base.nnps import DomainManager
>>> domain = DomainManager(xmin, xmax, ymin, ymax, zmin, zmax,
periodic_in_x=True, periodic_in_y=True,
periodic_in_z=False)
where xmin … zmax are floating point arguments delimiting the simulation domain and periodic_in_x,y,z are bools defining the periodic axes.
When the NNPS
object is constructed with this
DomainManager
, care is taken to create periodic ghosts for
particles in the vicinity of the periodic boundaries. These ghost
particles are given a special tag defined by
ParticleTAGS
class ParticleTAGS:
Local = 0
Remote = 1
Ghost = 2
Note
The Local tag is used for ordinary particles assigned to and owned by a given processor. This is the default tag for all particles.
Note
The Remote tag is used for ordinary particles assigned to but not owned by a given processor. Particles with this tag are typically used to satisfy neighbor queries across processor boundaries in a parallel simulation.
Note
The Ghost tag is used for particles that are created to satisfy boundary conditions locally.
Particle aligning¶
In PySPH, the ParticleArray
aligns all particles upon a
call to the ParticleArray.align_particles()
method. The
aligning is done so that all particles with the Local tag are placed
first, followed by particles with other tags.
There is no preference given to the tags other than the fact that a particle with a non-zero tag is placed after all particles with a zero (Local) tag. Intuitively, the local particles represent real particles or particles that we want to do active computation on (destination particles).
The data attribute ParticleArray.num_real_particles returns the
number of real or Local particles. The total number of particles in
a given ParticleArray
can be obtained by a call to the
ParticleArray.get_number_of_particles()
method.
The following is a simple example demonstrating this default behaviour of PySPH:
>>> x = numpy.array( [0, 1, 2, 3], dtype=numpy.float64 )
>>> tag = numpy.array( [0, 2, 0, 1], dtype=numpy.int32 )
>>> pa = utils.get_particle_array(x=x, tag=tag)
>>> print(pa.get_number_of_particles()) # total number of particles
4
>>> print(pa.num_real_particles) # no. of particles with tag 0
2
>>> x, tag = pa.get('x', 'tag', only_real_particles=True) # only real particles (tag == 0)
>>> print(x)
[0. 2.]
>>> print(tag)
[0 0]
>>> x, tag = pa.get('x', 'tag', only_real_particles=False) # all particles
>>> print(x)
[0. 2. 1. 3.]
>>> print(tag)
[0 0 2 1]
We are now in a position to put all these ideas together and write our first SPH application.
Parallel NNPS with PyZoltan¶
PySPH uses the Zoltan data management library for dynamic load
balancing through a Python wrapper PyZoltan
, which
provides functionality for parallel neighbor queries in a manner
completely analogous to NNPS
.
Particle data is managed and exchanged in parallel via a derivative of
the abstract base class ParallelManager
object. Continuing
with our example, we can instantiate a
ZoltanParallelManagerGeometric
object as:
>>> ... # create particles
>>> from pysph.parallel import ZoltanParallelManagerGeometric
>>> pm = ZoltanParallelManagerGeometric(dim, particles, comm, radius_scale, lb_method)
The constructor for the parallel manager is quite similar to the
NNPS
constructor, with two additional parameters, comm
and lb_method. The first is the MPI communicator object and the
latter is the partitioning algorithm requested. The supported geometric load balancing algorithms are Zoltan's recursive coordinate bisection (RCB), recursive inertial bisection (RIB) and Hilbert space-filling curve (HSFC) partitioners.
The particle distribution can be updated in parallel by a call to the
ParallelManager.update()
method. Particles across processor
boundaries that are needed for neighbor queries are assigned the tag
Remote as shown in the figure:

Figure: Local and remote particles in the vicinity of a processor boundary (dashed line)
Putting it together: A simple example¶
Now that we know how to work with particles, we will use the data structures to carry out the simplest SPH operation, namely, the estimation of particle density from a given distribution of particles.
We consider particles distributed on a uniform Cartesian lattice ( \(\Delta x = \Delta y = \Delta\)) in a doubly periodic domain \([0,1]\times[0,1]\).
The particle mass is set equal to the “volume” \(\Delta^2\) associated with each particle and the smoothing length is taken as \(1.3\times \Delta\). With this initialization, the estimate for the particle density is \(\langle \rho \rangle_a = \sum_b m_b W_{ab}\).
We will use the CubicSpline kernel, defined in the pysph.base.kernels module. The code to set up the particle distribution is given below:
# PySPH imports
from cyarray.carray import UIntArray
from pysph.base import utils
from pysph.base.kernels import CubicSpline
from pysph.base.nnps import DomainManager
from pysph.base.nnps import LinkedListNNPS
# NumPy
import numpy
# Create a particle distribution
dx = 0.01; dxb2 = 0.5 * dx
x, y = numpy.mgrid[dxb2:1:dx, dxb2:1:dx]
x = x.ravel(); y = y.ravel()
h = numpy.ones_like(x) * 1.3*dx
m = numpy.ones_like(x) * dx*dx
# Create the particle array
pa = utils.get_particle_array(x=x,y=y,h=h,m=m)
# Create the periodic DomainManager object and NNPS
domain = DomainManager(xmin=0., xmax=1., ymin=0., ymax=1., periodic_in_x=True, periodic_in_y=True)
nps = LinkedListNNPS(dim=2, particles=[pa,], radius_scale=2.0, domain=domain)
# The SPH kernel. The dimension argument is needed for the correct normalization constant
k = CubicSpline(dim=2)
Note
Notice that the particles were created with an offset of
\(\frac{\Delta}{2}\). This is required since the
NNPS
object will box-wrap particles near periodic
boundaries.
The NNPS
object will create periodic ghosts for the
particles along each periodic axis.
The ghost particles are assigned the tag value 2. For this example, periodic ghosts are created along each coordinate direction as shown in the figure.
SPH Kernels¶
Pairwise interactions in SPH are weighted by the kernel \(W_{ab}\). In PySPH, the pysph.base.kernels module provides a Python interface for these terms. The general definition for an SPH kernel is of the form:
class Kernel(object):
def __init__(self, dim=1):
self.radius_scale = 2.0
self.dim = dim
def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0):
...
return wij
def gradient(self, xij=[0., 0, 0], rij=1.0, h=1.0, grad=[0, 0, 0]):
...
grad[0] = dwij_x
grad[1] = dwij_y
grad[2] = dwij_z
The kernel is an object with two methods kernel and gradient. \(\text{xij}\) is the difference vector between the destination and source particle \(\boldsymbol{x}_{\text{i}} - \boldsymbol{x}_{\text{j}}\) with \(\text{rij} = \sqrt{ \boldsymbol{x}_{ij}^2}\). The gradient method accepts an additional argument that upon exit is populated with the kernel gradient values.
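For example, one can evaluate a kernel and its gradient interactively; the numbers here are arbitrary illustrative values:

>>> from pysph.base.kernels import CubicSpline
>>> k = CubicSpline(dim=2)
>>> wij = k.kernel(xij=[0.1, 0.0, 0.0], rij=0.1, h=0.1)
>>> grad = [0.0, 0.0, 0.0]
>>> k.gradient(xij=[0.1, 0.0, 0.0], rij=0.1, h=0.1, grad=grad)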
Density summation¶
In the final part of the code, we iterate over all target or destination particles and compute the density contributions from neighboring particles:
nbrs = UIntArray() # array for neighbors
x, y, h, m = pa.get('x', 'y', 'h', 'm', only_real_particles=False) # source particles will include ghosts
for i in range( pa.num_real_particles ): # iterate over all local particles
xi = x[i]; yi = y[i]; hi = h[i]
nps.get_nearest_particles(0, 0, i, nbrs) # get neighbors
neighbors = nbrs.get_npy_array() # numpy array of neighbors
rho = 0.0
for j in neighbors: # iterate over each neighbor
xij = xi - x[j] # interaction terms
yij = yi - y[j]
rij = numpy.sqrt( xij**2 + yij**2 )
hij = 0.5 * (h[i] + h[j])
wij = k.kernel( [xij, yij, 0.0], rij, hij) # kernel interaction
rho += m[j] * wij
pa.rho[i] = rho # contribution for this destination
The average density computed in this manner can be verified as \(\rho_{\text{avg}} = 0.99994676895585222\).
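This can be checked with a quick snippet once the loop above has run; we restrict the average to the real particles, since the ghost densities are not computed here:

>>> rho = pa.rho[:pa.num_real_particles]
>>> print(rho.mean())
0.99994676895585222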
Summary¶
In this document, we introduced the most fundamental data structures in PySPH for working with particles. With these data structures, PySPH can be used as a library for managing particles for your application.
If you are interested in the PySPH framework and want to try out some examples, check out A more detailed tutorial.
Contribute to docs¶
How to build the docs locally¶
To build the docs, clone the repository:
$ git clone https://github.com/pypr/pysph
Make sure to work in a pysph environment. The following instructions assume that the repository is cloned in the home directory. Change to the docs directory and run make html:
$ cd ~/pysph/docs/
$ make html
A possible error one might get is:
$ sphinx-build: Command not found
This means you don't have sphinx-build on your system. To install it system-wide do:
$ sudo apt-get install python3-sphinx
or to install it locally in an environment do:
$ pip install sphinx
Run make html again. The documentation is built locally in the ~/pysph/docs/build/html directory. Open the index.html file by running:
$ cd ~/pysph/docs/build/html
$ xdg-open index.html
How to add the documentation¶
As a starting point one can add documentation to one of the examples in
~/pysph/pysph/examples
folder. There is a dedicated
~/pysph/docs/source/examples
directory to add documentation to examples.
Choose an example to write documentation for,
$ cd ~/pysph/docs/source/examples
$ touch your_example.rst
We will write all the documentation in the rst file format. The index.rst file in the examples directory should know about our newly created file; add a reference next to the last written example:
* :ref:`Some_example`:
* :ref:`Other_example`:
* :ref:`taylor_green`: the Taylor-Green Vortex problem in 2D.
* :ref:`sphere_in_vessel`: A sphere floating in a hydrostatic tank example.
* :ref:`your_example_file`: Description of the example.
and at the top of the example file add the reference; for example, in your_example_file.rst you should add:
.. _your_example_file:
That’s it, add the documentation and send a pull request.
Gallery of PySPH examples¶
In the following, several PySPH examples are documented. These serve to illustrate various features of PySPH and show how one may use PySPH to solve a variety of problems.
The Taylor-Green Vortex¶
This example solves the classic Taylor-Green Vortex problem in two-dimensions. To run it one may do:
$ pysph run taylor_green
There are many command line options that this example provides, check them out with:
$ pysph run taylor_green -h
The example source can be seen at taylor_green.py.
This example demonstrates several useful features:
- user defined command line arguments and how they can be used.
- running the problem with multiple schemes.
- periodicity in both dimensions.
- post processing of generated data.
- using the
pysph.tools.sph_evaluator.SPHEvaluator
class for post-processing.
We discuss each of these below.
User command line arguments¶
The user defined command line arguments are easy to add. The following code snippet demonstrates how one adds this.
class TaylorGreen(Application):
def add_user_options(self, group):
group.add_argument(
"--init", action="store", type=str, default=None,
help="Initialize particle positions from given file."
)
group.add_argument(
"--perturb", action="store", type=float, dest="perturb", default=0,
help="Random perturbation of initial particles as a fraction "\
"of dx (setting it to zero disables it, the default)."
)
# ...
This code is straight-forward Python code to add options using the argparse
API. It is important to
note that the options are then available in the application’s options
attribute and can be accessed as self.options
from the application’s
methods. The consume_user_options
method highlights this.
def consume_user_options(self):
nx = self.options.nx
re = self.options.re
self.nu = nu = U*L/re
# ...
This method is called after the command line arguments are parsed. To refresh
your memory on the order of invocation of the various methods of the
application, see the documentation of the
pysph.solver.application.Application
class. This shows that once
the application is run using the run
method, the command line arguments
are parsed and the following methods are called (this means that at this
point, the application has a valid self.options
):
consume_user_options()
configure_scheme()
The configure_scheme
is important as this example allows the user to
change the Reynolds number which changes the viscosity as well as the
resolution via --nx
and --hdx
. The code for the configuration looks like:
def configure_scheme(self):
scheme = self.scheme
h0 = self.hdx * self.dx
if self.options.scheme == 'tvf':
scheme.configure(pb=self.options.pb_factor*p0, nu=self.nu, h0=h0)
elif self.options.scheme == 'wcsph':
scheme.configure(hdx=self.hdx, nu=self.nu, h0=h0)
elif self.options.scheme == 'edac':
scheme.configure(h=h0, nu=self.nu, pb=self.options.pb_factor*p0)
kernel = QuinticSpline(dim=2)
scheme.configure_solver(kernel=kernel, tf=self.tf, dt=self.dt)
Note the use of the self.options.scheme
and the use of the
scheme.configure
method. Furthermore, the method also calls the scheme’s
configure_solver
method.
Using multiple schemes¶
This is relatively easy, this is achieved by using the
pysph.sph.scheme.SchemeChooser
scheme as follows:
def create_scheme(self):
wcsph = WCSPHScheme(
['fluid'], [], dim=2, rho0=rho0, c0=c0, h0=h0,
hdx=hdx, nu=None, gamma=7.0, alpha=0.0, beta=0.0
)
tvf = TVFScheme(
['fluid'], [], dim=2, rho0=rho0, c0=c0, nu=None,
p0=p0, pb=None, h0=h0
)
edac = EDACScheme(
['fluid'], [], dim=2, rho0=rho0, c0=c0, nu=None,
pb=p0, h=h0
)
s = SchemeChooser(default='tvf', wcsph=wcsph, tvf=tvf, edac=edac)
return s
When using multiple schemes it is important to recall that each scheme needs
different particle properties. The schemes set these extra properties for you.
In this example, the create_particles
method has the following code:
def create_particles(self):
# ...
fluid = get_particle_array(name='fluid', x=x, y=y, h=h)
self.scheme.setup_properties([fluid])
The line that calls setup_properties
passes a list of the particle arrays
to the scheme so the scheme can configure/setup any additional properties.
Periodicity¶
This is rather easily done with the code in the create_domain
method:
def create_domain(self):
return DomainManager(
xmin=0, xmax=L, ymin=0, ymax=L, periodic_in_x=True,
periodic_in_y=True
)
See also Simulating periodicity.
Post-processing¶
The example has a significant chunk of code for post-processing the results, in the post_process method. This demonstrates how to iterate over the output files and read the file data to calculate various quantities. In particular, it also demonstrates the use of the
pysph.tools.sph_evaluator.SPHEvaluator
class. For example, consider the method:
def _get_sph_evaluator(self, array):
    if not hasattr(self, '_sph_eval'):
        from pysph.tools.sph_evaluator import SPHEvaluator
        equations = [
            ComputeAveragePressure(dest='fluid', sources=['fluid'])
        ]
        dm = self.create_domain()
        sph_eval = SPHEvaluator(
            arrays=[array], equations=equations, dim=2,
            kernel=QuinticSpline(dim=2), domain_manager=dm
        )
        self._sph_eval = sph_eval
    return self._sph_eval
This code creates the evaluator. Note that it takes just the particle arrays of interest, a set of equations (these can be as complex as the normal SPH equations, with groups and everything), the kernel, and a domain manager. The evaluator has two important methods:
- update_particle_arrays(…): this allows a user to update the arrays to a new set of values efficiently.
- evaluate: this actually performs the evaluation of the equations.
The example has this code which demonstrates these:
def _get_post_process_props(self, array):
    # ...
    sph_eval = self._get_sph_evaluator(array)
    sph_eval.update_particle_arrays([array])
    sph_eval.evaluate()
    # ...
Note the use of the above methods.
A rigid sphere floating in a hydrostatic tank¶
This example demonstrates the API for running a rigid-fluid coupling problem in PySPH. To run it one may do:
$ cd ~/pysph/pysph/examples/rigid_body/
$ python sphere_in_vessel_akinci.py
There are many command line options that this example provides, check them out with:
$ python sphere_in_vessel_akinci.py -h
The example source can be seen at sphere_in_vessel.py.
This example demonstrates:
- Setting up a simulation involving rigid bodies and fluid
- Rigid-fluid coupling, which is the main focus
It is divided into three parts:
- Create particles
- Create equations
- Run the application
Create particles¶
In this example, we have a tank with a resting fluid and a sphere falling into
the tank. Create three particle arrays, tank
, fluid
and cube
.
tank
and fluid
has to obey wcsph
scheme, where as cube
has to obey
rigid body equations.
def create_particles(self):
    # elided
    fluid = get_particle_array_wcsph(x=xf, y=yf, h=h, m=m, rho=rho,
                                     name="fluid")
    # elided
    tank = get_particle_array_wcsph(x=xt, y=yt, h=h, m=m, rho=rho,
                                    rad_s=rad_s, V=V, name="tank")
    for name in ['fx', 'fy', 'fz']:
        tank.add_property(name)
    cube = get_particle_array_rigid_body(x=xc, y=yc, h=h, m=m, rho=rho,
                                         rad_s=rad_s, V=V, cs=cs,
                                         name="cube")
    return [fluid, tank, cube]
The reason for adding the properties fx, fy and fz to the tank particle array is discussed below. The next step is to set up the equations.
Create equations¶
def create_equations(self):
    equations = [
        Group(equations=[
            BodyForce(dest='cube', sources=None, gy=-9.81),
        ], real=False),
        Group(equations=[
            SummationDensity(
                dest='fluid',
                sources=['fluid']),
            SummationDensityBoundary(
                dest='fluid', sources=['tank', 'cube'], fluid_rho=1000.0)
        ]),
        # Tait equation of state
        Group(equations=[
            TaitEOSHGCorrection(dest='fluid', sources=None, rho0=self.ro,
                                c0=self.co, gamma=7.0),
        ], real=False),
        Group(equations=[
            MomentumEquation(dest='fluid', sources=['fluid'],
                             alpha=self.alpha, beta=0.0, c0=self.co,
                             gy=-9.81),
            AkinciRigidFluidCoupling(dest='fluid',
                                     sources=['cube', 'tank']),
            XSPHCorrection(dest='fluid', sources=['fluid', 'tank']),
        ]),
        Group(equations=[
            RigidBodyCollision(dest='cube', sources=['tank'], kn=1e5)
        ]),
        Group(equations=[RigidBodyMoments(dest='cube', sources=None)]),
        Group(equations=[RigidBodyMotion(dest='cube', sources=None)]),
    ]
    return equations
A few points to note while dealing with the Akinci formulation:
- While computing the density of the fluid due to the solid, make sure to use SummationDensityBoundary, because the usual SummationDensity computes the density by considering the mass of the particle, whereas SummationDensityBoundary computes it by considering the volume of the particle. This makes a lot of difference in flows with heavy density variation.
- Apply TaitEOSHGCorrection so that there is no negative pressure.
- The force from the boundary (here the tank) on the fluid is computed using the AkinciRigidFluidCoupling equation, whereas usually this is done using the momentum equation. There are a few advantages to this: if we compute the boundary force using the momentum equation, we first have to compute the density of the boundary, then its pressure, and only then the force; using AkinciRigidFluidCoupling we do not need to compute the pressure of the boundary at all, because the force depends only on the fluid particle’s pressure.

def loop(self, d_idx, d_m, d_rho, d_au, d_av, d_aw, d_p, s_idx,
         s_V, s_fx, s_fy, s_fz, DWIJ, s_m, s_p, s_rho):
    # elided
    d_au[d_idx] += -psi * _t1 * DWIJ[0]
    d_av[d_idx] += -psi * _t1 * DWIJ[1]
    d_aw[d_idx] += -psi * _t1 * DWIJ[2]
    s_fx[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[0]
    s_fy[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[1]
    s_fz[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[2]

- Since in AkinciRigidFluidCoupling (more on this in the next point) we compute both the force on the fluid from the solid particles and the force on the solid from the fluid particles, the sources must hold the properties fx, fy and fz.
- The first few equations deal with the simulation of the fluid in the hydrostatic tank. The equation dealing with rigid-fluid coupling is AkinciRigidFluidCoupling; it handles both the forces exerted by the fluid on the solid body and the forces exerted by the solid on the fluid, so we find both forces in a single equation.
- Usually in an SPH equation we only change properties of the destination particle array, but in this case both destination and source properties are manipulated.
The final equations deal with the dynamics of rigid bodies, which are discussed in other example files.
Run the application¶
Finally, run the application:
if __name__ == '__main__':
    app = RigidFluidCoupling()
    app.run()
Flow past a circular cylinder using open boundary conditions¶
This example demonstrates the API of inlet and outlet boundary conditions in PySPH. The flow past a circular cylinder is an example which uses both inlet and outlet boundary conditions. To run it one may do:
$ pysph run flow_past_cylinder_2d
There are many command line options that this example provides, check them out with:
$ pysph run flow_past_cylinder_2d -h
In this example, we have a wind tunnel with two bounding slip walls on the top and bottom of the tunnel. The inlet is on the left and the outlet is on the right. In order to perform the simulation, five particle arrays, solid, fluid, wall, inlet and outlet, are generated. fluid, solid and wall have to be solved using the edac scheme, whereas inlet and outlet are solved according to the equations provided by the Inlet Outlet Manager (IOM). The example source can be seen at flow_past_cylinder_2d.py.
This example demonstrates:
- Setting up a wind tunnel kind of simulation.
- Setting up inlet and outlet boundary conditions.
- Force evaluation on the solid body of interest.
The IOM is created in the Application instance; however, it is passed to a Scheme instance and most of its methods are called in the scheme only. We discuss the implementation in the EDAC scheme in Writing inlet outlet manager. The IOM has the following functions:
- Creation of ghost particle arrays
- Creation of inlet outlet stepper
- Creation of inlet outlet equations
- Creation of inlet outlet updater
The following are discussed in detail:
- Construction of IOM
- Passing IOM to the scheme
- Creating ghost particles
- Creating updater
- Overall setup
- Evaluating forces on solid
Construction of IOM¶
def _get_io_info(self):
    from pysph.sph.bc.hybrid.inlet import Inlet
    from pysph.sph.bc.hybrid.outlet import Outlet
    from pysph.sph.bc.hybrid.simple_inlet_outlet import (
        SimpleInletOutlet)
    i_update_cls = Inlet
    o_update_cls = Outlet
    o_has_ghost = False
    manager = SimpleInletOutlet
    # props_to_copy is initialized in code elided here.
    props_to_copy += ['uta', 'pta', 'u0', 'v0', 'w0', 'p0']
    inlet_info = InletInfo(
        pa_name='inlet', normal=[-1.0, 0.0, 0.0],
        refpoint=[0.0, 0.0, 0.0], equations=inleteqns,
        has_ghost=i_has_ghost, update_cls=i_update_cls,
        umax=umax
    )
    outlet_info = OutletInfo(
        pa_name='outlet', normal=[1.0, 0.0, 0.0],
        refpoint=[self.Lt, 0.0, 0.0], has_ghost=o_has_ghost,
        update_cls=o_update_cls, equations=None,
        props_to_copy=props_to_copy
    )
    return inlet_info, outlet_info, manager

def _create_inlet_outlet_manager(self):
    inlet_info, outlet_info, manager = self._get_io_info()
    iom = manager(
        fluid_arrays=['fluid'], inletinfo=[inlet_info],
        outletinfo=[outlet_info]
    )
    return iom
In the function _get_io_info, the inlet_info and outlet_info objects are created and the manager class is returned. The inlet_info and outlet_info objects contain specific information about the inlet and outlet that enables the IOM to create the equations, stepper and updater. In _create_inlet_outlet_manager the IOM is created using the info objects. Note that the extra properties required by the equations are also passed by the IOM.
Passing IOM to scheme¶
def configure_scheme(self):
    scheme = self.scheme
    self.iom = self._create_inlet_outlet_manager()
    scheme.inlet_outlet_manager = self.iom
    pfreq = 100
    kernel = QuinticSpline(dim=2)
    self.iom.update_dx(self.dx)
    scheme.configure(h=self.h, nu=self.nu)
    scheme.configure_solver(kernel=kernel, tf=self.tf, dt=self.dt,
                            pfreq=pfreq, n_damp=0)
The IOM object of the application is initialized in the configure_scheme method of the Application class. All post-initialization methods that require data from the user can be called here, e.g. update_dx.
Creating ghost particles¶
def create_particles(self):
    fluid = self._create_fluid()
    solid = self._create_solid()
    outlet = self._create_outlet()
    inlet = self._create_inlet()
    wall = self._create_wall()
    ghost_inlet = self.iom.create_ghost(inlet, inlet=True)
    ghost_outlet = self.iom.create_ghost(outlet, inlet=False)
    particles = [fluid, inlet, outlet, solid, wall]
    if ghost_inlet:
        particles.append(ghost_inlet)
    if ghost_outlet:
        particles.append(ghost_outlet)
    self.scheme.setup_properties(particles)
    self._set_wall_normal(wall)
    if self.io_method == 'hybrid':
        fluid.uag[:] = umax
        fluid.uta[:] = umax
        outlet.uta[:] = umax
    return particles
The particle arrays ghost_inlet and ghost_outlet are generated by the IOM depending upon the type of IOM subclass used. The properties uag and uta are the time-averaged velocity arrays in the x direction, initialized at t = 0.
Creating updater¶
The purpose of the updater is to remove particles from the inlet and add them to the fluid whenever a particle crosses the inlet-fluid interface; the same is done in the case of the outlet. It also adds new particles to the inlet as required and removes particles from the simulation when they flow past the outlet.
def create_inlet_outlet(self, particle_arrays):
    iom = self.iom
    io = iom.get_inlet_outlet(particle_arrays)
    return io
The function create_inlet_outlet takes the updater io created by the IOM and plugs it into the update routine of the application class automatically.
Overall setup¶
In order to run the simulation, the IOM object must be passed to the scheme. In the scheme, the IOM object must be implemented in the manner described in Writing inlet outlet manager.
A few points to note while dealing with the inlet outlet boundary condition:
- Construction of the IOM happens after the scheme is created; the scheme is initially created with a void IOM:

def create_scheme(self):
    h = nu = None
    s = EDACScheme(
        ['fluid'], ['solid'], dim=2, rho0=rho, c0=c0, h=h,
        pb=p0, nu=nu, inlet_outlet_manager=None,
        inviscid_solids=['wall']
    )
    return s

- The IOM must be configured in the configure_scheme function.
- In case you change the integrator, make sure the updater io updates in the appropriate stage. For example, with a PECIntegrator class of integrator, the particles are integrated half a step in stage 1 and finally advected in stage 2, so io updates the particle arrays after stage 2 is complete. In case one wants to do the update in stage 1 (while using another integrator), the arguments must be passed to the updater appropriately.
Evaluating forces on solid¶
In order to evaluate the forces, the solid is considered as a fluid and the force is evaluated by solving the following equations:
equations = [
    Group(
        equations=[
            SummationDensity(dest='fluid', sources=['fluid', 'solid']),
            SummationDensity(dest='solid', sources=['fluid', 'solid']),
            SetWallVelocity(dest='solid', sources=['fluid']),
        ], real=False),
    Group(
        equations=[
            # Pressure gradient terms
            MomentumEquationPressureGradient(
                dest='solid', sources=['fluid'], pb=p0),
            SolidWallNoSlipBCReverse(
                dest='solid', sources=['fluid'], nu=self.nu),
        ], real=True),
]
The equations are solved on the output saved as *.npz files. In the equation SolidWallNoSlipBCReverse we simply reverse the sign of the velocity difference, unlike the usual equation where \(u - u_g\) is used.
The total force is evaluated by multiplying the acceleration by the mass of the solid particles:
fxp = sum(solid.m*solid.au)
fyp = sum(solid.m*solid.av)
fxf = sum(solid.m*solid.auf)
fyf = sum(solid.m*solid.avf)
fx = fxf + fxp
fy = fyf + fyp
Here, au is the acceleration due to pressure and auf is that due to shear stress. The force fx gives the drag force and fy gives the lift force.
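As a rough sketch of how this fits together in post-processing (the name force_eval is a stand-in for an SPHEvaluator built with the equations above; iter_output is from pysph.solver.utils):

from pysph.solver.utils import iter_output

def _compute_forces(self, force_eval):
    # Iterate over the saved output files and accumulate drag/lift.
    t, fx, fy = [], [], []
    for sd, fluid, solid in iter_output(self.output_files, 'fluid', 'solid'):
        force_eval.update_particle_arrays([fluid, solid])
        force_eval.evaluate(t=sd['t'], dt=sd['dt'])
        fx.append(sum(solid.m*solid.au) + sum(solid.m*solid.auf))
        fy.append(sum(solid.m*solid.av) + sum(solid.m*solid.avf))
        t.append(sd['t'])
    return t, fx, fy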
- The Taylor-Green Vortex: the Taylor-Green Vortex problem in 2D.
- A rigid sphere floating in a hydrostatic tank: A sphere floating in a hydrostatic tank example.
- Flow past a circular cylinder using open boundary conditions: Flow past a circular cylinder in 2D.
Reference documentation¶
Autogenerated from doc strings using sphinx’s autodoc feature.
Module application¶
-
class
pysph.solver.application.
Application
(fname=None, output_dir=None, domain=None)[source]¶ Bases:
object
Subclass this to run any SPH simulation. There are several important methods that this class provides. The application is typically used as follows:
class EllipticalDrop(Application):
    def create_particles(self):
        # ...

    def create_scheme(self):
        # ...

    ...

app = EllipticalDrop()
app.run()
app.post_process(app.info_filename)
The post_process() method is entirely optional and typically performs the post-processing. It is important to understand the correct sequence of the method calls. When the Application instance is created, the following methods are invoked by the __init__() method:
- initialize(): use this to setup any constants etc.
- create_scheme(): this needs to be overridden if one wishes to use a pysph.sph.scheme.Scheme. If one does not want to use a scheme, the create_equations() and create_solver() methods must be overridden.
- self.scheme.add_user_options(): i.e. the scheme’s command line options are added, if there is a scheme.
- add_user_options(): add any user specified command line options.
When app.run() is called, the following methods are called in order:
- _parse_command_line(): this is a private method but it is important to note that the command line arguments are first parsed.
- consume_user_options(): this is called right after the command line args are parsed.
- configure_scheme(): this is where one may configure the scheme according to the passed command line arguments.
- create_solver(): create the solver; note that this is needed only if one has not used a scheme, otherwise this will by default return the solver created by the chosen scheme.
- create_equations(): create any equations. Defaults to letting the scheme generate and return the desired equations.
- create_particles()
- create_inlet_outlet()
- create_domain(): not needed for non-periodic domains.
- create_nnps(): not needed unless one wishes to override the default NNPS.
- create_tools(): add any pysph.solver.tools.Tool instances.
- customize_output(): customize the output visualization.
Additionally, as the application runs there are several convenient optional callbacks setup:
- pre_step(): called before each time step.
- post_stage(): called after every stage of the integration.
- post_step(): called after each time step.
Finally, it is a good idea to overload the post_process() method to perform any post processing for the generated data. The application instance also has several important attributes; some of these are as follows:
- args: command line arguments, typically sys.argv[1:].
- domain: optional pysph.base.nnps_base.DomainManager instance.
- fname: filename pattern to use when dumping output.
- inlet_outlet: list of inlet/outlets.
- nnps: instance of pysph.base.nnps_base.NNPS.
- num_procs: total number of processes running.
- output_dir: output directory.
- parallel_manager: in parallel, an instance of pysph.parallel.parallel_manager.ParallelManager.
- particles: list of pysph.base.particle_array.ParticleArray.
- rank: rank of this process.
- scheme: the optional pysph.sph.scheme.Scheme instance.
- solver: the solver instance, pysph.solver.solver.Solver.
- tools: a list of possible pysph.solver.tools.Tool.
Constructor
Parameters: - fname (str) – file name to use for the output files.
- output_dir (str) – output directory name.
- domain (pysph.base.nnps_base.DomainManager) – A domain manager to use. This is used for periodic domains etc.
-
add_tool
(tool)[source]¶ Add a
pysph.solver.tools.Tool
instance to the application.
-
add_user_options
(group)[source]¶ Add any user-defined options to the given option group.
Note
This uses the argparse module.
-
configure_scheme
()[source]¶ This is called after
consume_user_options()
is called. One can configure the SPH scheme here as at this point all the command line options are known.
-
consume_user_options
()[source]¶ This is called right after the command line arguments are parsed.
All the parsed options are available in
self.options
and can be used in this method.This is meant to be overridden by users to setup any internal variables etc. that depend on the command line arguments passed. Note that this method is called well before the solver or particles are created.
-
create_domain
()[source]¶ Create a pysph.base.nnps_base.DomainManager and return it if needed.
This is used for periodic domains etc. Note that if the domain is passed to
__init__()
, then this method is not called.
-
create_inlet_outlet
(particle_arrays)[source]¶ Create inlet and outlet objects and return them as a list.
The method is passed a dictionary of particle arrays keyed on the name of the particle array.
-
create_nnps
()[source]¶ Create any NNPS if desired and return it, else a default NNPS will be created automatically.
-
create_scheme
()[source]¶ Create a suitable SPH scheme and return it.
Note that this method is called after the arguments are all processed and after
consume_user_options()
is called.
-
create_tools
()[source]¶ Create any tools and return a sequence of them. This method is called after particles/inlets etc. are all setup, configured etc.
-
customize_output
()[source]¶ Customize the output file visualization by adding any files.
For example, the pysph view command will look for a mayavi_config.py file that can be used to script the viewer. You can use self._mayavi_config('code') to add a default customization here. Note that this is executed before the simulation starts.
-
post_process
(info_fname_or_directory)[source]¶ Given an info filename or a directory containing the info file, read the information and do any post-processing of the results. Please overload the method to perform any processing.
The info file has a few useful attributes and can be read using the read_info() method. The output_files property should provide the output files generated.
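A typical override therefore starts by reading the info file and collecting the output files (a minimal sketch; the actual processing is up to the user):

def post_process(self, info_fname):
    self.read_info(info_fname)
    if len(self.output_files) == 0:
        return
    # ... iterate over self.output_files and compute/plot quantities ...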
-
post_stage
(current_time, dt, stage)[source]¶ If overloaded, this is called automatically after each integrator stage, i.e. if the integrator is a two stage integrator it will be called after the first and second stages.
The method is passed (current_time, dt, stage). See the pysph.sph.integrator.Integrator.one_timestep() method for examples of how this is called.
-
post_step
(solver)[source]¶ If overloaded, this is called automatically after each integrator step. The method is passed the solver instance.
-
pre_step
(solver)[source]¶ If overloaded, this is called automatically before each integrator step. The method is passed the solver instance.
-
read_info
(fname_or_dir)[source]¶ Read the information from the given info file (or directory containing the info file, the first found info file will be used).
-
run
(argv=None)[source]¶ Run the application.
This basically calls setup() and then solve().
Parameters: argv (list) – Optional command line arguments. Handy when running interactively.
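For instance, when driving an application interactively one might do (the class name and option values are illustrative):

app = EllipticalDrop()
app.run(argv=['--tf', '1.0', '--pfreq', '100'])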
-
setup
(argv=None)[source]¶ Setup the application.
This may be used to setup the various pieces of infrastructure to run an SPH simulation, for example, this will parse the command line arguments passed, setup the scheme, solver, equations etc. It will not call the solver’s solve method though. This can be useful if you wish to manually run the solver.
Parameters: argv (list) – Optional command line arguments. Handy when running interactively.
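This makes it possible, for example, to set everything up and then drive the solver by hand (a sketch using the solver attribute documented above):

app = EllipticalDrop()
app.setup(argv=[])
app.solver.solve()  # run the time stepping manually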
Module controller¶
Implement infrastructure for the solver to add various interfaces
-
class
pysph.solver.controller.
CommandManager
(solver, comm=None)[source]¶ Bases:
object
Class to manage and synchronize commands from various Controllers
-
add_function
(callable, interval=1)[source]¶ add a function to to be called every interval iterations
-
add_interface
(callable, block=True)[source]¶ Add a callable interface to the controller
The callable must accept a Controller instance argument. The callable is called in a new thread of its own and it can perform various actions using the methods defined on the Controller instance passed to it. The newly created thread is set to daemon mode and returned.
-
get_particle_array_combined
(idx, procs=None)[source]¶ get a single particle array with combined data from all procs
Specifying processes is currently not implemented.
-
-
class
pysph.solver.controller.
Controller
(command_manager, block=True)[source]¶ Bases:
object
Controller class acts as a proxy to control the solver
This is passed as an argument to the interface
Methods available:
- get – get the value of a solver parameter
- set – set the value of a solver parameter
- get_result – return result of a queued command
- pause_on_next – pause solver thread on next iteration
- wait – wait (block) calling thread till solver is paused (call after pause_on_next)
- cont – continue solver thread (call after pause_on_next)
Various other methods are also available, as listed in CommandManager.dispatch_dict, which perform different functions.
- The methods in CommandManager.active_methods do their operation and return the result (if any) immediately.
- The methods in CommandManager.lazy_methods do their operation later, when the solver thread is available, and return a task-id. The result of the task can be obtained later using the blocking call get_result(), which waits till the result is available and returns it. The availability of the result can be checked using the lock returned by the get_task_lock() method.
FIXME: wait/cont currently do not work in parallel
-
cont
()[source]¶ continue solver thread after it has been paused by pause_on_next
call this only after calling the pause_on_next method
-
set_blocking
(block)[source]¶ set the blocking mode to True/False
In blocking mode (block=True) all methods other than getting of solver properties block until the command is executed by the solver and return the results. The blocking time can vary depending on the time taken by the solver per iteration and the command_interval. In non-blocking mode, these methods queue the command for later and return a string corresponding to the task_id of the operation. The result can later be obtained by a (blocking) call to get_result with the task_id as argument.
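A sketch of how an interface function might use these methods (the property name 'tf' and the overall flow are illustrative, not a prescribed usage):

def my_interface(controller):
    # getting solver properties returns immediately, even in non-blocking mode
    tf = controller.get('tf')
    controller.set_blocking(False)
    task_id = controller.set('tf', 2*tf)  # queued; returns a task-id string
    controller.get_result(task_id)        # blocks till the command executes
    controller.pause_on_next()            # pause the solver on its next iteration
    controller.wait()                     # block till the solver is paused
    controller.cont()                     # resume the solver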
-
class
pysph.solver.controller.
DummyComm
[source]¶ Bases:
object
A dummy MPI.Comm implementation as a placeholder for serial runs
SPH equations¶
-
class
pysph.sph.equation.
Equation
(dest, sources)[source]¶ Bases:
object
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
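As an illustration of this API, a minimal custom equation could look like the following sketch (LinearEOS and its parameters are hypothetical, not part of PySPH; the d_* argument names follow PySPH's convention of exposing destination particle properties as arrays):

from pysph.sph.equation import Equation

class LinearEOS(Equation):
    # Hypothetical linear equation of state: p = c0^2 * (rho - rho0).
    def __init__(self, dest, sources, rho0=1000.0, c0=10.0):
        self.rho0 = rho0
        self.c0 = c0
        super(LinearEOS, self).__init__(dest, sources)

    def loop(self, d_idx, d_rho, d_p):
        d_p[d_idx] = self.c0*self.c0*(d_rho[d_idx] - self.rho0)

An equation like this can be instantiated as LinearEOS(dest='fluid', sources=None) and placed in a Group like the other equations shown in the examples above.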
Basic SPH Equations¶
References
[Monaghan1992] | J. Monaghan, Smoothed Particle Hydrodynamics, “Annual Review of Astronomy and Astrophysics”, 30 (1992), pp. 543-574. |
[Monaghan2005] | J. Monaghan, “Smoothed particle hydrodynamics”, Reports on Progress in Physics, 68 (2005), pp. 1703-1759. |
-
class
pysph.sph.basic_equations.
BodyForce
(dest, sources, fx=0.0, fy=0.0, fz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Add a body force to the particles:
\(\boldsymbol{f} = (f_x, f_y, f_z)\)
Parameters: - fx (float) – Body force per unit mass along the x-axis
- fy (float) – Body force per unit mass along the y-axis
- fz (float) – Body force per unit mass along the z-axis
-
class
pysph.sph.basic_equations.
ContinuityEquation
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Density rate:
\(\frac{d\rho_a}{dt} = \sum_b m_b \boldsymbol{v}_{ab}\cdot \nabla_a W_{ab}\)
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.basic_equations.
IsothermalEOS
(dest, sources, rho0, c0, p0)[source]¶ Bases:
pysph.sph.equation.Equation
Compute the pressure using the Isothermal equation of state:
\(p = p_0 + c_0^2(\rho - \rho_0)\)
Parameters: - rho0 (float) – Reference density of the fluid (\(\rho_0\))
- c0 (float) – Maximum speed of sound expected in the system (\(c_0\))
- p0 (float) – Reference pressure in the system (\(p_0\))
-
class
pysph.sph.basic_equations.
MonaghanArtificialViscosity
(dest, sources, alpha=1.0, beta=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Classical Monaghan style artificial viscosity [Monaghan2005]
\[\frac{d\mathbf{v}_{a}}{dt} = -\sum_{b}m_{b}\Pi_{ab}\nabla_{a}W_{ab}\]where
\[\begin{split}\Pi_{ab}=\begin{cases}\frac{-\alpha_{\pi}\bar{c}_{ab}\phi_{ab}+\beta_{\pi}\phi_{ab}^{2}}{\bar{\rho}_{ab}}, & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}<0\\0, & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}\geq0\end{cases}\end{split}\]with
\[\phi_{ab}=\frac{h\mathbf{v}_{ab}\cdot\mathbf{r}_{ab}}{|\mathbf{r}_{ab}|^{2}+\epsilon^{2}}, \qquad \bar{c}_{ab}=\frac{c_{a}+c_{b}}{2}, \qquad \bar{\rho}_{ab}=\frac{\rho_{a}+\rho_{b}}{2}\]Parameters: - alpha (float) – produces a shear and bulk viscosity
- beta (float) – used to handle high Mach number shocks
-
class
pysph.sph.basic_equations.
SummationDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Good old Summation density:
\(\rho_a = \sum_b m_b W_{ab}\)
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.basic_equations.
VelocityGradient2D
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Compute the SPH evaluation for the velocity gradient tensor in 2D.
The expression for the velocity gradient is:
\(\frac{\partial v^i}{\partial x^j} = \sum_{b}\frac{m_b}{\rho_b}(v_b - v_a)\frac{\partial W_{ab}}{\partial x_a^j}\)
Notes
The tensor properties are stored in the variables v_ij where ‘i’ refers to the velocity component and ‘j’ refers to the spatial component. Thus v_10 is \(\frac{\partial v}{\partial x}\)
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.basic_equations.
VelocityGradient3D
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Compute the SPH evaluation for the velocity gradient tensor in 3D.
The expression for the velocity gradient is:
\(\frac{\partial v^i}{\partial x^j} = \sum_{b}\frac{m_b}{\rho_b}(v_b - v_a)\frac{\partial W_{ab}}{\partial x_a^j}\)
Notes
The tensor properties are stored in the variables v_ij where ‘i’ refers to the velocity component and ‘j’ refers to the spatial component. Thus v_21 is \(\frac{\partial w}{\partial y}\)
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.basic_equations.
XSPHCorrection
(dest, sources, eps=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Position stepping with XSPH correction [Monaghan1992]
\[\frac{d\mathbf{r}_{a}}{dt}=\mathbf{\hat{v}}_{a}=\mathbf{v}_{a}- \epsilon\sum_{b}m_{b}\frac{\mathbf{v}_{ab}}{\bar{\rho}_{ab}}W_{ab}\]Parameters: eps (float) – \(\epsilon\) as in the above equation Notes
This equation must be used to advect the particles. XSPH can be turned off by setting the parameter
eps = 0
.
-
class
pysph.sph.basic_equations.
XSPHCorrectionForLeapFrog
(dest, sources, eps=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
The XSPH correction [Monaghan1992] alone. This is meant to be used with a leap-frog integrator which already considers the velocity of the particles. It simply computes the correction term and adds that to
ax, ay, az
.\[\frac{d\mathbf{r}_{a}}{dt}=\mathbf{\hat{v}}_{a}= - \epsilon\sum_{b}m_{b}\frac{\mathbf{v}_{ab}}{\bar{\rho}_{ab}}W_{ab}\]Parameters: eps (float) – \(\epsilon\) as in the above equation Notes
This equation must be used to advect the particles. XSPH can be turned off by setting the parameter
eps = 0
.
Basic WCSPH Equations¶
-
class
pysph.sph.wc.basic.
ContinuityEquationDeltaSPH
(dest, sources, c0, delta=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Continuity equation with dissipative terms
\(\frac{d\rho_a}{dt} = \sum_b \rho_a \frac{m_b}{\rho_b} \left( \boldsymbol{v}_{ab}\cdot \nabla_a W_{ab} + \delta \eta_{ab} \cdot \nabla_{a} W_{ab} (h_{ab}\frac{c_{ab}}{\rho_a}(\rho_b - \rho_a)) \right)\)
References
[Marrone2011] S. Marrone et al., “delta-SPH model for simulating violent impact flows”, Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp 1526–1542. Parameters: - c0 (float) – reference speed of sound
- delta (float) – coefficient used to control the intensity of diffusion of density
-
class
pysph.sph.wc.basic.
ContinuityEquationDeltaSPHPreStep
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Continuity equation with dissipative terms See
pysph.sph.wc.basic.ContinuityEquationDeltaSPH
The matrix \(L_a\) is multiplied to \(\nabla W_{ij}\) in thepysph.sph.scheme.WCSPHScheme
class by usingpysph.sph.wc.kernel_correction.GradientCorrectionPreStep
andpysph.sph.wc.kernel_correction.GradientCorrection
.Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.basic.
MomentumEquation
(dest, sources, c0, alpha=1.0, beta=1.0, gx=0.0, gy=0.0, gz=0.0, tensile_correction=False)[source]¶ Bases:
pysph.sph.equation.Equation
Classic Monaghan Style Momentum Equation with Artificial Viscosity
\[\frac{d\mathbf{v}_{a}}{dt}=-\sum_{b}m_{b}\left(\frac{p_{b}} {\rho_{b}^{2}}+\frac{p_{a}}{\rho_{a}^{2}}+\Pi_{ab}\right) \nabla_{a}W_{ab}\]where
\[\begin{split}\Pi_{ab}=\begin{cases} \frac{-\alpha\bar{c}_{ab}\mu_{ab}+\beta\mu_{ab}^{2}}{\bar{\rho}_{ab}} & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}<0;\\ 0 & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}\geq0; \end{cases}\end{split}\]with
\[ \begin{align}\begin{aligned}\begin{split}\mu_{ab}=\frac{h\mathbf{v}_{ab}\cdot\mathbf{r}_{ab}} {\mathbf{r}_{ab}^{2}+\eta^{2}}\\\end{split}\\\begin{split}\bar{c}_{ab} = \frac{c_a + c_b}{2}\\\end{split}\\\bar{\rho}_{ab} = \frac{\rho_a + \rho_b}{2}\end{aligned}\end{align} \]References
[Monaghan1992] J. Monaghan, Smoothed Particle Hydrodynamics, “Annual Review of Astronomy and Astrophysics”, 30 (1992), pp. 543-574. Parameters: - c0 (float) – reference speed of sound
- alpha (float) – produces a shear and bulk viscosity
- beta (float) – used to handle high Mach number shocks
- gx (float) – body force per unit mass along the x-axis
- gy (float) – body force per unit mass along the y-axis
- gz (float) – body force per unit mass along the z-axis
- tensile_correction (bool) – switch for tensile instability correction (Default: False)
-
class
pysph.sph.wc.basic.
MomentumEquationDeltaSPH
(dest, sources, rho0, c0, alpha=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation defined in JOSEPHINE and the delta-SPH model
\[\frac{du_{i}}{dt}=-\frac{1}{\rho_{i}}\sum_{j}\left(p_{j}+p_{i}\right) \nabla_{i}W_{ij}V_{j}+\mathbf{g}_{i}+\alpha hc_{0}\rho_{0}\sum_{j} \pi_{ij}\nabla_{i}W_{ij}V_{j}\]where
\[\pi_{ij}=\frac{\mathbf{u}_{ij}\cdot\mathbf{r}_{ij}} {|\mathbf{r}_{ij}|^{2}}\]References
[Marrone2011] S. Marrone et al., “delta-SPH model for simulating violent impact flows”, Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp 1526–1542. [Cherfils2012] J. M. Cherfils et al., “JOSEPHINE: A parallel SPH code for free-surface flows”, Computer Physics Communications, 183 (2012), pp 1468–1480. Parameters: - rho0 (float) – reference density
- c0 (float) – reference speed of sound
- alpha (float) – coefficient used to control the intensity of the diffusion of velocity
Notes
Artificial viscosity is used in this momentum equation and is controlled by the parameter \(\alpha\). This form of the artificial viscosity is similar but not identical to the Monaghan-style artificial viscosity.
-
class
pysph.sph.wc.basic.
PressureGradientUsingNumberDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Pressure gradient discretized using number density:
\[\frac{d \boldsymbol{v}_a}{dt} = -\frac{1}{m_a}\sum_b (\frac{p_a}{V_a^2} + \frac{p_b}{V_b^2})\nabla_a W_{ab}\]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.basic.
TaitEOS
(dest, sources, rho0, c0, gamma, p0=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Tait equation of state for water-like fluids
\(p_a = \frac{c_{0}^2\rho_0}{\gamma}\left( \left(\frac{\rho_a}{\rho_0}\right)^{\gamma} -1\right)\)
References
[Cole1948] H. R. Cole, “Underwater Explosions”, Princeton University Press, 1948. [Batchelor2002] G. Batchelor, “An Introduction to Fluid Dynamics”, Cambridge University Press, 2002. [Monaghan2005] J. Monaghan, “Smoothed particle hydrodynamics”, Reports on Progress in Physics, 68 (2005), pp. 1703-1759. Parameters: - rho0 (float) – reference density of fluid particles
- c0 (float) – maximum speed of sound expected in the system
- gamma (float) – constant
- p0 (float) – reference pressure in the system (defaults to zero).
Notes
The reference speed of sound, c0, is to be taken approximately as 10 times the maximum expected velocity in the system. The particle sound speed is given by the usual expression:
\(c_a = \sqrt{\frac{\partial p}{\partial \rho}}\)
-
class
pysph.sph.wc.basic.
TaitEOSHGCorrection
(dest, sources, rho0, c0, gamma)[source]¶ Bases:
pysph.sph.equation.Equation
Tait Equation of State with Hughes and Graham Correction
\[p_a = \frac{c_{0}^2\rho_0}{\gamma}\left( \left(\frac{\rho_a}{\rho_0}\right)^{\gamma} -1\right)\]where
\[\begin{split}\rho_{a}=\begin{cases}\rho_{a} & \rho_{a}\geq\rho_{0}\\ \rho_{0} & \rho_{a}<\rho_{0}\end{cases}\end{split}\]References
[Hughes2010] J. P. Hughes and D. I. Graham, “Comparison of incompressible and weakly-compressible SPH models for free-surface water flows”, Journal of Hydraulic Research, 48 (2010), pp. 105-117. Parameters: - rho0 (float) – reference density
- c0 (float) – reference speed of sound
- gamma (float) – constant
Notes
The correction is to be applied on boundary particles and imposes a minimum value of the density (rho0) which is set upon instantiation. This correction avoids particle sticking behaviour at walls.
-
class
pysph.sph.wc.basic.
UpdateSmoothingLengthFerrari
(dest, sources, dim, hdx)[source]¶ Bases:
pysph.sph.equation.Equation
Update the particle smoothing lengths
\(h_a = hdx \left(\frac{m_a}{\rho_a}\right)^{\frac{1}{d}}\)
References
[Ferrari2009] A. Ferrari et al., “A new 3D parallel SPH scheme for free surface flows”, Computers and Fluids, 38 (2009), pp. 1203–1217. Parameters: - dim (float) – number of dimensions
- hdx (float) – scaling factor
Notes
Ideally, the kernel scaling factor should be determined from the kernel used based on a linear stability analysis. The default value of (hdx=1) reduces to the formulation suggested by Ferrari et al. who used a Cubic Spline kernel.
Typically, a change in the smoothing length should mean the neighbors are re-computed which in PySPH means the NNPS must be updated. This equation should therefore be placed as the last equation so that after the final corrector stage, the smoothing lengths are updated and the new NNPS data structure is computed.
Note however that since this is to be used with incompressible flow equations, the density variations are small and hence the smoothing lengths should also not vary too much.
Viscosity functions¶
-
class
pysph.sph.wc.viscosity.
ClearyArtificialViscosity
(dest, sources, dim, alpha=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Artificial viscosity proposed By P. Cleary:
\[\Pi_{ab} = -\frac{16 \mu_a \mu_b}{\rho_a \rho_b (\mu_a + \mu_b)}\left( \frac{\boldsymbol{v}_{ab} \cdot \boldsymbol{r}_{ab}}{\boldsymbol{r}_{ab}^2 + \epsilon} \right),\]where the viscosity is determined from the parameter \(\alpha\) as
\[\mu_a = \frac{1}{8}\alpha h_a c_a \rho_a\]This equation is described in the 2005 review paper by Monaghan
- J. J. Monaghan, “Smoothed Particle Hydrodynamics”, Reports on Progress in Physics, 2005, 68, pp 1703–1759 [JM05]
-
class
pysph.sph.wc.viscosity.
LaminarViscosity
(dest, sources, nu, eta=0.01)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.wc.viscosity.
LaminarViscosityDeltaSPH
(dest, sources, dim, rho0, nu)[source]¶ Bases:
pysph.sph.equation.Equation
See section 2 of the below reference [Sun2017]
References
[Sun2017] P. Sun, A. Colagrossi, S. Marrone, A. Zhang, “The \(\delta\)plus-SPH model: simple procedures for a further improvement of the SPH scheme”, Computer Methods in Applied Mechanics and Engineering, 315 (2017), pp. 25-49.
-
class
pysph.sph.wc.viscosity.
MonaghanSignalViscosityFluids
(dest, sources, alpha, h)[source]¶ Bases:
pysph.sph.equation.Equation
Transport Velocity Formulation¶
References
[Adami2012] | (1, 2) S. Adami et. al “A generalized wall boundary condition for smoothed particle hydrodynamics”, Journal of Computational Physics (2012), pp. 7057–7075. |
[Adami2013] | S. Adami et. al “A transport-velocity formulation for smoothed particle hydrodynamics”, Journal of Computational Physics (2013), pp. 292–307. |
-
class
pysph.sph.wc.transport_velocity.
ContinuityEquation
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Conservation of mass equation
Eq (6) in [Adami2012]:
\[\frac{d\rho_a}{dt} = \rho_a \sum_b \frac{m_b}{\rho_b} \boldsymbol{v}_{ab} \cdot \nabla_a W_{ab}\]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.transport_velocity.
ContinuitySolid
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Continuity equation for the solid’s ghost particles.
The key difference is that we use the ghost velocity ug, and not the particle velocity u.
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.transport_velocity.
MomentumEquationArtificialStress
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Artificial stress contribution to the Momentum Equation
\[\frac{d\boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[ \frac{1}{2}(\boldsymbol{A}_a + \boldsymbol{A}_b) : \nabla_a W_{ab}\right]\]where the artificial stress terms are given by:
\[ \boldsymbol{A} = \rho \boldsymbol{v} (\tilde{\boldsymbol{v}} - \boldsymbol{v})\]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.transport_velocity.
MomentumEquationArtificialViscosity
(dest, sources, c0, alpha=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Artificial viscosity for the momentum equation
Eq. (11) in [Adami2012]:
\[\frac{d \boldsymbol{v}_a}{dt} = -\sum_b m_b \alpha h_{ab} c_{ab} \frac{\boldsymbol{v}_{ab}\cdot \boldsymbol{r}_{ab}}{\rho_{ab}\left(|r_{ab}|^2 + \epsilon \right)}\nabla_a W_{ab}\]where
\[ \begin{align}\begin{aligned}\begin{split}\rho_{ab} = \frac{\rho_a + \rho_b}{2}\\\end{split}\\\begin{split}c_{ab} = \frac{c_a + c_b}{2}\\\end{split}\\h_{ab} = \frac{h_a + h_b}{2}\end{aligned}\end{align} \]Parameters: - alpha (float) – constant
- c0 (float) – speed of sound
-
class
pysph.sph.wc.transport_velocity.
MomentumEquationPressureGradient
(dest, sources, pb, gx=0.0, gy=0.0, gz=0.0, tdamp=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation for the Transport Velocity Formulation: Pressure
Eq. (8) in [Adami2013]:
\[\frac{d \boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[-\bar{p}_{ab}\nabla_a W_{ab} \right]\]where
\[\bar{p}_{ab} = \frac{\rho_b p_a + \rho_a p_b}{\rho_a + \rho_b}\]Parameters: - pb (float) – background pressure
- gx (float) – Body force per unit mass along the x-axis
- gy (float) – Body force per unit mass along the y-axis
- gz (float) – Body force per unit mass along the z-axis
- tdamp (float) – damping time
Notes
This equation should have the destination as fluid and sources as fluid and boundary particles.
This function also computes the contribution to the background pressure and accelerations due to a body force or gravity.
The body forces are damped according to Eq. (13) in [Adami2012] to avoid instantaneous accelerations. By default, damping is neglected.
-
class
pysph.sph.wc.transport_velocity.
MomentumEquationViscosity
(dest, sources, nu)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation for the Transport Velocity Formulation: Viscosity
Eq. (8) in [Adami2013]:
\[\frac{d \boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[ \bar{\eta}_{ab}\hat{r}_{ab}\cdot \nabla_a W_{ab} \frac{\boldsymbol{v}_{ab}}{|\boldsymbol{r}_{ab}|}\right]\]where
\[\bar{\eta}_{ab} = \frac{2\eta_a \eta_b}{\eta_a + \eta_b}\]Parameters: nu (float) – kinematic viscosity
-
class
pysph.sph.wc.transport_velocity.
SetWallVelocity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Extrapolating the fluid velocity on to the wall
Eq. (22) in [Adami2012]:
\[\tilde{\boldsymbol{v}}_a = \frac{\sum_b\boldsymbol{v}_b W_{ab}} {\sum_b W_{ab}}\]Notes
The destination particle array for this equation should define the filtered velocity variables \(uf, vf, wf\).
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.transport_velocity.
SolidWallNoSlipBC
(dest, sources, nu)[source]¶ Bases:
pysph.sph.equation.Equation
Solid wall boundary condition [Adami2012]
This boundary condition is to be used with fixed ghost particles in SPH simulations and is formulated for the general case of moving boundaries.
The velocity and pressure of the fluid particles is extrapolated to the ghost particles and these values are used in the equations of motion.
No-penetration:
Ghost particles participate in the continuity and state equations with fluid particles. This means as fluid particles approach the wall, the pressure of the ghost particles increases to generate a repulsion force that prevents particle penetration.
No-slip:
Extrapolation is used to set the dummy velocity of the ghost particles for viscous interaction. First, the smoothed velocity field of the fluid phase is extrapolated to the wall particles:
\[\tilde{v}_a = \frac{\sum_b v_b W_{ab}}{\sum_b W_{ab}}\]In the second step, for the viscous interaction in Eqs. (10) in [Adami2012] and Eq. (8) in [Adami2013], the velocity of the ghost particles is assigned as:
\[v_b = 2v_w -\tilde{v}_a,\]where \(v_w\) is the prescribed wall velocity and \(v_b\) is the ghost particle in the interaction.
Parameters: nu (float) – kinematic viscosity Notes
For this equation the destination particle array should be the fluid and the source should be ghost or boundary particles. The boundary particles must define a prescribed velocity \(u_0, v_0, w_0\)
-
class
pysph.sph.wc.transport_velocity.
SolidWallPressureBC
(dest, sources, rho0, p0, b=1.0, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Solid wall pressure boundary condition [Adami2012]
This boundary condition is to be used with fixed ghost particles in SPH simulations and is formulated for the general case of moving boundaries.
The velocity and pressure of the fluid particles is extrapolated to the ghost particles and these values are used in the equations of motion.
Pressure boundary condition:
The pressure of the ghost particle is also calculated from the fluid particle by interpolation using:
\[p_g = \frac{\sum_f p_f W_{gf} + \boldsymbol{g - a_g} \cdot \sum_f \rho_f \boldsymbol{r}_{gf}W_{gf}}{\sum_f W_{gf}},\]where the subscripts g and f relate to the ghost and fluid particles respectively.
Density of the wall particle is then set using this pressure
\[\rho_w=\rho_0\left(\frac{p_w - \mathcal{X}}{p_0} + 1\right)^{\frac{1}{\gamma}}\]Parameters: - rho0 (float) – reference density
- p0 (float) – reference pressure
- b (float) – constant (default 1.0)
- gx (float) – Body force per unit mass along the x-axis
- gy (float) – Body force per unit mass along the y-axis
- gz (float) – Body force per unit mass along the z-axis
Notes
For a two fluid system (boundary, fluid), this equation must be instantiated with boundary as the destination and fluid as the source.
The boundary particle array must additionally define a property \(wij\) for the denominator in Eq. (27) from [Adami2012]. This array sums the kernel terms from the ghost particle to the fluid particle.
-
class
pysph.sph.wc.transport_velocity.
StateEquation
(dest, sources, p0, rho0, b=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Generalized Weakly Compressible Equation of State
\[p_a = p_0\left[ \left(\frac{\rho}{\rho_0}\right)^\gamma - b \right] + \mathcal{X}\]Notes
This is the generalized Tait’s equation of state and the suggested values in [Adami2013] are \(\mathcal{X} = 0\), \(\gamma=1\) and \(b = 1\).
The reference pressure \(p_0\) is calculated from the artificial sound speed and reference density:
\[p_0 = \frac{c^2\rho_0}{\gamma}\]Parameters: - p0 (float) – reference pressure
- rho0 (float) – reference density
- b (float) – constant (default 1.0).
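As a worked example of the formula above, with the suggested values \(\gamma = 1\) and \(b = 1\), a reference density \(\rho_0 = 1000\) and an artificial sound speed \(c = 10\) give \(p_0 = c^2\rho_0/\gamma = 10^5\).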
-
class
pysph.sph.wc.transport_velocity.
SummationDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Summation density with volume summation
In addition to the standard summation density, the number density for the particle is also computed. The number density is important for multi-phase flows to define a local particle volume independent of the material density.
\[ \begin{align}\begin{aligned}\begin{split}\rho_a = \sum_b m_b W_{ab}\\\end{split}\\\mathcal{V}_a = \frac{1}{\sum_b W_{ab}}\end{aligned}\end{align} \]Notes
Note that in the pysph implementation, V is the inverse volume of a particle, i.e. the equation computes V as follows:
\[\mathcal{V}_a = \sum_b W_{ab}\]For this equation, the destination particle array must define the variable V for particle volume.
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.transport_velocity.
VolumeFromMassDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Set the inverse volume using mass density
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.transport_velocity.
VolumeSummation
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Number density for volume computation
See SummationDensity
Note that the quantity V is really \(\sigma\) of the original paper, i.e. inverse of the particle volume.
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
Generalized Transport Velocity Formulation¶
Some notes on the paper,
- In the viscosity term of equation (17) a factor of ‘2’ is missing.
- A negative sign is missing from equation (22) i.e, either put a negative sign in equation (22) or at the integrator step equation(25).
- The Solid Mechanics Equations are not tested.
References
[ZhangHuAdams2017] | Chi Zhang, Xiangyu Y. Hu, Nikolaus A. Adams “A generalized transport-velocity formulation for smoothed particle hydrodynamics”, Journal of Computational Physics 337 (2017), pp. 216–232. |
-
class
pysph.sph.wc.gtvf.
ContinuityEquationGTVF
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Evolution of density
From [ZhangHuAdams2017], equation (12),
\[\frac{\tilde{d} \rho_i}{dt} = \rho_i \sum_j \frac{m_j}{\rho_j} \nabla W_{ij} \cdot \tilde{\boldsymbol{v}}_{ij}\]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.gtvf.
CorrectDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Density correction
From [ZhangHuAdams2017], equation (13),
\[\rho_i = \frac{\sum_j m_j W_{ij}} {\min(1, \sum_j \frac{m_j}{\rho_j^{*}} W_{ij})}\]where,
\[\rho_j^{*} = \text{density before this correction is applied.}\]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.gtvf.
DeviatoricStressRate
(dest, sources, dim, G)[source]¶ Bases:
pysph.sph.equation.Equation
Stress rate for solids
From [ZhangHuAdams2017], equation (5),
\[\frac{d \boldsymbol{\sigma}'}{dt} = 2 G (\boldsymbol{\epsilon} - \frac{1}{3} \text{Tr}(\boldsymbol{\epsilon})\textbf{I}) + \boldsymbol{\sigma}' \cdot \boldsymbol{\Omega}^{T} + \boldsymbol{\Omega} \cdot \boldsymbol{\sigma}'\]where,
\[\boldsymbol{\Omega_{i/j}} = \frac{1}{2} \left(\nabla \otimes \boldsymbol{v}_{i/j} - (\nabla \otimes \boldsymbol{v}_{i/j})^{T}\right)\]\[\boldsymbol{\epsilon_{i/j}} = \frac{1}{2} \left(\nabla \otimes \boldsymbol{v}_{i/j} + (\nabla \otimes \boldsymbol{v}_{i/j})^{T}\right)\]see the class VelocityGradient for \(\nabla \otimes \boldsymbol{v}_i\)
Parameters: - dim (int) – Dimensionality of the problem.
- G (float) – value of shear modulus
-
class
pysph.sph.wc.gtvf.
GTVFIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
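The default predict-evaluate-correct pattern mentioned above looks roughly like the following sketch (illustrative, not the actual GTVFIntegrator implementation):

def one_timestep(self, t, dt):
    self.initialize()
    # predict
    self.stage1()
    self.update_domain()
    self.do_post_stage(0.5*dt, 1)
    # evaluate
    self.compute_accelerations()
    # correct
    self.stage2()
    self.update_domain()
    self.do_post_stage(dt, 2)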
-
-
class
pysph.sph.wc.gtvf.
GTVFScheme
(fluids, solids, dim, rho0, c0, nu, h0, pref, gx=0.0, gy=0.0, gz=0.0, b=1.0, alpha=0.0)[source]¶ Bases:
pysph.sph.scheme.Scheme
Parameters: - fluids (list) – List of names of fluid particle arrays.
- solids (list) – List of names of solid particle arrays.
- dim (int) – Dimensionality of the problem.
- rho0 (float) – Reference density.
- c0 (float) – Reference speed of sound.
- nu (float) – Real viscosity of the fluid.
- h0 (float) – Reference smoothing length.
- pref (float) – reference pressure for rate of change of transport velocity.
- gx (float) – Body force acceleration components in x direction.
- gy (float) – Body force acceleration components in y direction.
- gz (float) – Body force acceleration components in z direction.
- b (float) – constant for the equation of state.
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
class
pysph.sph.wc.gtvf.
GTVFStep
[source]¶
-
class
pysph.sph.wc.gtvf.
MomentumEquationArtificialStress
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation Artificial stress for solids
See the class MomentumEquationPressureGradient for details.
Parameters: dim (int) – Dimensionality of the problem.
-
class
pysph.sph.wc.gtvf.
MomentumEquationArtificialStressSolid
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation Artificial stress for solids
See the class MomentumEquationPressureGradient for details.
Parameters: dim (int) – Dimensionality of the problem.
-
class
pysph.sph.wc.gtvf.
MomentumEquationPressureGradient
(dest, sources, pref, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation
From [ZhangHuAdams2017], equation (17),
\[\frac{\tilde{d} \boldsymbol{v}_i}{dt} = - \sum_j m_j \nabla W_{ij} \cdot \left[\left(\frac{p_i}{\rho_i^2} + \frac{p_j}{\rho_j^2} \right)\textbf{I} - \left(\frac{\boldsymbol{A_i}}{\rho_i^2} + \frac{\boldsymbol{A_j}}{\rho_j^2} \right)\right] + \sum_j \frac{\eta_{ij}\boldsymbol{v}_{ij}}{\rho_i \rho_j r_{ij}} \nabla W_{ij} \cdot \boldsymbol{x}_{ij}\]where,
\[\boldsymbol{A_{i/j}} = \rho_{i/j} \boldsymbol{v}_{i/j} \otimes (\tilde{\boldsymbol{v}}_{i/j} - \boldsymbol{v}_{i/j})\]\[\eta_{ij} = \frac{2\eta_i \eta_j}{\eta_i + \eta_j}\]\[\eta_{i/j} = \rho_{i/j} \nu\]for solids, replace \(\boldsymbol{A}_{i/j}\) with \(\boldsymbol{\sigma}'_{i/j}\).
The rate of change of transport velocity is given by,
\[(\frac{d\boldsymbol{v}_i}{dt})_c = -p_i^0 \sum_j \frac{m_j} {\rho_i^2} \nabla \tilde{W}_{ij}\]where,
\[\tilde{W}_{ij} = W(\boldsymbol{x}_{ij}, 0.5 h_{ij})\]\[p_i^0 = \min(10|p_i|, p_{ref})\]Notes:
A negative sign in \((\frac{d\boldsymbol{v}_i}{dt})_c\) is missing in the paper [ZhangHuAdams2017].
Parameters: - pref (float) – reference pressure
- gx (float) – body force per unit mass along the x-axis
- gy (float) – body force per unit mass along the y-axis
- gz (float) – body force per unit mass along the z-axis
-
class
pysph.sph.wc.gtvf.
MomentumEquationViscosity
(dest, sources, nu)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation viscosity term
See the class MomentumEquationPressureGradient for details.
Notes:
A factor of ‘2’ is missing in the viscosity equation given by [ZhangHuAdams2017].
Parameters: nu (float) – viscosity of the fluid.
-
class
pysph.sph.wc.gtvf.
VelocityGradient
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of velocity vector
\[(\nabla \otimes \tilde{\boldsymbol{v}})_i = \sum_j \frac{m_j} {\rho_j} \tilde{\boldsymbol{v}}_{ij} \otimes \nabla W_{ij}\]Parameters: dim (int) – Dimensionality of the problem.
-
class
pysph.sph.wc.density_correction.
MLSFirstOrder2D
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Moving Least Squares density reinitialization. This is a first order density reinitialization.
\[W_{ab}^{MLS} = \beta\left(\mathbf{r}_{a}\right)\cdot\left(\mathbf{r}_a - \mathbf{r}_b\right)W_{ab}\]\[\beta\left(\mathbf{r}_{a}\right) = A^{-1}\left[1, 0, 0\right]^{T}\]where
\[A = \sum_{b}W_{ab}\tilde{A}\frac{m_{b}}{\rho_{b}}\]\[\tilde{A} = pp^{T}\]where
\[p = \left[1, x_{a}-x_{b}, y_{a}-y_{b}\right]^{T}\]\[\rho_{a} = \sum_{b} m_{b}W_{ab}^{MLS}\]
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.density_correction.
MLSFirstOrder3D
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.density_correction.
ShepardFilter
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Shepard Filter density reinitialization. This is a zeroth order density reinitialization.
\[\tilde{W_{ab}} = \frac{W_{ab}}{\sum_{b} W_{ab}\frac{m_{b}}{\rho_{b}}}\]\[\rho_{a} = \sum_{b} m_{b}\tilde{W_{ab}}\]
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
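A short sketch of assembling one of these reinitialization equations into a group (array name assumed); in practice such a group is typically evaluated only every few time steps:

from pysph.sph.equation import Group
from pysph.sph.wc.density_correction import ShepardFilter

# Zeroth-order reinitialization of the 'fluid' density from its own particles.
reinit = [Group(equations=[ShepardFilter(dest='fluid', sources=['fluid'])])]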
Kernel Corrections¶
These are the equations for the kernel corrections that are mentioned in the paper by Bonet and Lok [BonetLok1999].
References
[BonetLok1999] | Bonet, J. and Lok T.-S.L. (1999) Variational and Momentum Preservation Aspects of Smoothed Particle Hydrodynamic Formulations. |
-
class
pysph.sph.wc.kernel_correction.
GradientCorrection
(dest, sources, dim=2, tol=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Kernel Gradient Correction
From [BonetLok1999], equations (42) and (45)
\[\nabla \tilde{W}_{ab} = L_{a}\nabla W_{ab}\]\[L_{a} = \left(\sum \frac{m_{b}}{\rho_{b}} \nabla W_{ab} \mathbf{\otimes}x_{ba} \right)^{-1}\]
-
class
pysph.sph.wc.kernel_correction.
GradientCorrectionPreStep
(dest, sources, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.wc.kernel_correction.
KernelCorrection
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Kernel Correction
From [BonetLok1999], equation (53):
\[\mathbf{f}_{a} = \frac{\sum_{b}\frac{m_{b}}{\rho_{b}} \mathbf{f}_{b}W_{ab}}{\sum_{b}\frac{m_{b}}{\rho_{b}}W_{ab}}\]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.kernel_correction.
MixedGradientCorrection
(dest, sources, dim=2, tol=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Mixed Kernel Gradient Correction
This is as per [BonetLok1999]. See the MixedKernelCorrectionPreStep for the equations.
-
class
pysph.sph.wc.kernel_correction.
MixedKernelCorrectionPreStep
(dest, sources, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation
Mixed Kernel Correction
From [BonetLok1999], equations (54), (57) and (58)
\[\tilde{W}_{ab} = \frac{W_{ab}}{\sum_{b} V_{b}W_{ab}}\]\[\nabla \tilde{W}_{ab} = L_{a}\nabla \bar{W}_{ab}\]where,
\[L_{a} = \left(\sum_{b} V_{b} \nabla \bar{W}_{ab} \mathbf{\otimes}x_{ba} \right)^{-1}\]\[\nabla \bar{W}_{ab} = \frac{\nabla W_{ab} - \gamma} {\sum_{b} V_{b}W_{ab}}\]\[\gamma = \frac{\sum_{b} V_{b}\nabla W_{ab}} {\sum_{b} V_{b}W_{ab}}\]
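Since the correction matrix \(L_a\) must be available before it is applied, the pre-step equation presumably has to run in a group before the correction itself. A hedged sketch (array name assumed):

from pysph.sph.equation import Group
from pysph.sph.wc.kernel_correction import (GradientCorrection,
                                            GradientCorrectionPreStep)

equations = [
    # First compute the correction matrix L_a ...
    Group(equations=[GradientCorrectionPreStep(dest='fluid',
                                               sources=['fluid'], dim=2)]),
    # ... then correct the kernel gradients seen by subsequent equations.
    Group(equations=[
        GradientCorrection(dest='fluid', sources=['fluid'], dim=2, tol=0.1),
        # equations that consume the corrected gradients go here
    ]),
]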
CRKSPH corrections¶
These are equations for the basic kernel corrections in [CRKSPH2017].
References
[CRKSPH2017] | Nicholas Frontiere, Cody D. Raskin, J. Michael Owen “CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme”, Journal of Computational Physics 332 (2017), pp. 160–209. |
-
class
pysph.sph.wc.crksph.
CRKSPH
(dest, sources, dim=2, tol=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Conservative Reproducing Kernel SPH
Equations from the paper [CRKSPH2017].
\[W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij}\]\[\partial_{\gamma}W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha} x_{ij}^{\alpha}\right)\partial_{\gamma}W_{ij} + \partial_{\gamma}A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij} + A_{i}\left(\partial_{\gamma}B_{i}^{\alpha} x_{ij}^{\alpha} + B_{i}^{\gamma}\right)W_{ij}\]\[\nabla\tilde{W}_{ij} = \frac{1}{2}\left(\nabla W_{ij}^{R}-\nabla W_{ji}^{R} \right)\]where,
\[A_{i} = \left[m_{0} - \left(m_{2}^{-1}\right)^{\alpha \beta} m_1^{\beta}m_1^{\alpha}\right]^{-1}\]\[B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^{\alpha \beta} m_{1}^{\beta}\]\[\partial_{\gamma}A_{i} = -A_{i}^{2}\left(\partial_{\gamma} m_{0}-\left(m_{2}^{-1}\right)^{\alpha \beta}\left( m_{1}^{\beta}\partial_{\gamma}m_{1}^{\alpha} + \partial_{\gamma}m_{1}^{\beta}m_{1}^{\alpha}\right) + \left(m_{2}^{-1}\right)^{\alpha \phi}\partial_{\gamma} m_{2}^{\phi \psi}\left(m_{2}^{-1}\right)^{\psi \beta} m_{1}^{\beta}m_{1}^{\alpha} \right)\]\[\partial_{\gamma}B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^ {\alpha \beta}\partial_{\gamma}m_{1}^{\beta} + \left(m_{2}^{-1}\right)^ {\alpha \phi}\partial_{\gamma}m_{2}^{\phi \psi}\left(m_{2}^ {-1}\right)^{\psi \beta}m_{1}^{\beta}\]\[m_{0} = \sum_{j}V_{j}W_{ij}\]\[m_{1}^{\alpha} = \sum_{j}V_{j}x_{ij}^{\alpha}W_{ij}\]\[m_{2}^{\alpha \beta} = \sum_{j}V_{j}x_{ij}^{\alpha} x_{ij}^{\beta}W_{ij}\]\[\partial_{\gamma}m_{0} = \sum_{j}V_{j}\partial_{\gamma} W_{ij}\]\[\partial_{\gamma}m_{1}^{\alpha} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}\partial_{\gamma}W_{ij}+\delta^ {\alpha \gamma}W_{ij} \right]\]\[\partial_{\gamma}m_{2}^{\alpha \beta} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}x_{ij}^{\beta}\partial_{\gamma}W_{ij} + \left(x_{ij}^{\alpha}\delta^{\beta \gamma} + x_{ij}^{\beta} \delta^{\alpha \gamma} \right)W_{ij} \right]\]Parameters: - dim (int) – Dimensionality of the problem.
- tol (float) – Tolerance value to decide between the standard and the corrected kernel
-
class
pysph.sph.wc.crksph.
CRKSPHIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study how this is done. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
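For illustration, a conceptual sketch of a minimal predict-evaluate-correct one_timestep built only from the methods listed above; this mirrors what PECIntegrator does but is not its actual source, and a real subclass would still be constructed with the usual stepper keyword arguments:

from pysph.sph.integrator import Integrator

class MyPECIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.initialize()
        self.stage1()                 # predict
        self.update_domain()
        self.compute_accelerations()  # evaluate
        self.stage2()                 # correct
        self.do_post_stage(dt, 1)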
-
-
class
pysph.sph.wc.crksph.
CRKSPHPreStep
(dest, sources, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.wc.crksph.
CRKSPHScheme
(fluids, dim, rho0, c0, nu, h0, p0, gx=0.0, gy=0.0, gz=0.0, cl=2, cq=1, gamma=7.0, eta_crit=0.3, eta_fold=0.2, tol=0.5, has_ghosts=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
Parameters: - fluids (list) – a list with names of fluid particle arrays
- solids (list) – a list with names of solid (or boundary) particle arrays
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
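A hedged sketch of instantiating and configuring this scheme; the array name and all numerical values are placeholders, and dt/tf are simply forwarded to the solver through **kw as described above:

from pysph.sph.wc.crksph import CRKSPHScheme

scheme = CRKSPHScheme(fluids=['fluid'], dim=2, rho0=1000.0, c0=10.0,
                      nu=0.0, h0=0.02, p0=1e5)
scheme.configure_solver(dt=1e-5, tf=1.0)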
-
class
pysph.sph.wc.crksph.
CRKSPHSymmetric
(dest, sources, dim=2, tol=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Conservative Reproducing Kernel SPH
This is symmetric and will only work for particles of the same array.
Equations from the paper [CRKSPH2017].
\[W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij}\]\[\partial_{\gamma}W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha} x_{ij}^{\alpha}\right)\partial_{\gamma}W_{ij} + \partial_{\gamma}A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij} + A_{i}\left(\partial_{\gamma}B_{i}^{\alpha} x_{ij}^{\alpha} + B_{i}^{\gamma}\right)W_{ij}\]\[\nabla\tilde{W}_{ij} = \frac{1}{2}\left(\nabla W_{ij}^{R}-\nabla W_{ji}^{R} \right)\]where,
\[A_{i} = \left[m_{0} - \left(m_{2}^{-1}\right)^{\alpha \beta} m_1^{\beta}m_1^{\alpha}\right]^{-1}\]\[B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^{\alpha \beta} m_{1}^{\beta}\]\[\partial_{\gamma}A_{i} = -A_{i}^{2}\left(\partial_{\gamma} m_{0}-\left(m_{2}^{-1}\right)^{\alpha \beta}\left( m_{1}^{\beta}\partial_{\gamma}m_{1}^{\alpha} + \partial_{\gamma}m_{1}^{\beta}m_{1}^{\alpha}\right) + \left(m_{2}^{-1}\right)^{\alpha \phi}\partial_{\gamma} m_{2}^{\phi \psi}\left(m_{2}^{-1}\right)^{\psi \beta} m_{1}^{\beta}m_{1}^{\alpha} \right)\]\[\partial_{\gamma}B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^ {\alpha \beta}\partial_{\gamma}m_{1}^{\beta} + \left(m_{2}^{-1}\right)^ {\alpha \phi}\partial_{\gamma}m_{2}^{\phi \psi}\left(m_{2}^ {-1}\right)^{\psi \beta}m_{1}^{\beta}\]\[m_{0} = \sum_{j}V_{j}W_{ij}\]\[m_{1}^{\alpha} = \sum_{j}V_{j}x_{ij}^{\alpha}W_{ij}\]\[m_{2}^{\alpha \beta} = \sum_{j}V_{j}x_{ij}^{\alpha} x_{ij}^{\beta}W_{ij}\]\[\partial_{\gamma}m_{0} = \sum_{j}V_{j}\partial_{\gamma} W_{ij}\]\[\partial_{\gamma}m_{1}^{\alpha} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}\partial_{\gamma}W_{ij}+\delta^ {\alpha \gamma}W_{ij} \right]\]\[\partial_{\gamma}m_{2}^{\alpha \beta} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}x_{ij}^{\beta}\partial_{\gamma}W_{ij} + \left(x_{ij}^{\alpha}\delta^{\beta \gamma} + x_{ij}^{\beta} \delta^{\alpha \gamma} \right)W_{ij} \right]\]
-
class
pysph.sph.wc.crksph.
CRKSPHUpdateGhostProps
(dest, sources=None, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.wc.crksph.
EnergyEquation
(dest, sources, dim, gamma, gx=0.0, gy=0.0, gz=0.0, cl=2, cq=1, eta_crit=0.5, eta_fold=0.2, tol=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Energy Equation
From [CRKSPH2017], equation (66):
\[\Delta u_{ij} = \frac{f_{ij}}{2}\left(v_j^{\alpha}(t) + v_j^{\alpha}(t + \Delta t) - v_i^{\alpha}(t) - v_i^{\alpha}(t + \Delta t)\right) \frac{Dv_{ij}^{\alpha}}{Dt}\]\[\begin{split}f_{ij} = \begin{cases} 1/2 &|s_i - s_j| = 0,\\ s_{\min} / (s_{\min} + s_{\max}) &\Delta u_{ij}\times(s_i - s_j) > 0\\ s_{\max} / (s_{\min} + s_{\max}) &\Delta u_{ij}\times(s_i - s_j) < 0\\ \end{cases}\end{split}\]\[s_{\min} = \min(|s_i|, |s_j|)\]\[s_{\max} = \max(|s_i|, |s_j|)\]\[s_{i/j} = \frac{p_{i/j}}{\rho_{i/j}^\gamma}\]see MomentumEquation for \(\frac{Dv_{ij}^{\alpha}}{Dt}\)
-
class
pysph.sph.wc.crksph.
MomentumEquation
(dest, sources, dim, gx=0.0, gy=0.0, gz=0.0, cl=2, cq=1, eta_crit=0.3, eta_fold=0.2, tol=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation
From [CRKSPH2017], equation (64):
\[\frac{Dv_{i}^{\alpha}}{Dt} = -\frac{1}{2 m_i}\sum_{j} V_i V_j (P_i + P_j + Q_i + Q_j) (\partial_{\alpha}W_{ij}^R - \partial_{\alpha} W_{ji}^R)\]where,
\[V_{i/j} = \text{dest/source particle number density}\]\[P_{i/j} = \text{dest/source particle pressure}\]\[Q_i = \rho_{i} (-C_{l} c_{i} \mu_{i} + C_{q} \mu_{i}^{2})\]\[\mu_i = \min \left(0, \frac{\hat{v}_{ij} \eta_{i}^{\alpha}}{\eta_{i}^{\alpha}\eta_{i}^{\alpha} + \epsilon^{2}}\right)\]\[\hat{v}_{ij}^{\alpha} = v_{i}^{\alpha} - v_{j}^{\alpha} - \frac{\phi_{ij}}{2}\left(\partial_{\beta} v_i^{\alpha} + \partial_{\beta}v_j^{\alpha}\right) x_{ij}^{\beta}\]\[\begin{split}\phi_{ij} = \max \left[0, \min \left[1, \frac{4r_{ij}}{(1 + r_{ij})^2}\right]\right] \times \begin{cases} \exp{\left[-\left((\eta_{ij} - \eta_{crit})/\eta_{fold}\right)^2\right]}, &\eta_{ij} < \eta_{crit} \\ 1, & \eta_{ij} >= \eta_{crit} \end{cases}\end{split}\]\[\eta_{ij} = \min(\eta_i, \eta_j)\]\[\eta_{i/j} = (x_{ij}^{\alpha} x_{ij}^{\alpha})^{1/2} / h_{i/j}\]\[r_{ij} = \frac{\partial_{\beta} v_i^{\alpha} x_{ij}^{\alpha} x_{ij}^{\beta}}{\partial_{\beta} v_j^{\alpha}x_{ij}^{\alpha} x_{ij}^{\beta}}\]\[\partial_{\beta} v_i^{\alpha} = -\sum_j V_j v_{ij}^{\alpha} \partial_{\beta} W_{ij}^R\]
-
class
pysph.sph.wc.crksph.
NumberDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Number Density
From [CRKSPH2017], equation (75):
\[V_{i}^{-1} = \sum_{j} W_{ij}\]Note that the quantity \(V\) is the inverse of the particle volume, so when using it in an equation use \(\frac{1}{V}\).
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.crksph.
SpeedOfSound
(dest, sources=None, gamma=7.0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.wc.crksph.
StateEquation
(dest, sources, gamma)[source]¶ Bases:
pysph.sph.equation.Equation
State Equation
State equation for ideal gas, from [CRKSPH2017] equation (77):
\[p_i = (\gamma - 1)\rho_{i} u_i\]where, \(u_i\) is the specific thermal energy.
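As a quick numeric check of equation (77), assume \(\gamma = 1.4\), \(\rho_i = 1\) and \(u_i = 2.5\) (the left state of a standard Sod shock tube in code units):

gamma, rho_i, u_i = 1.4, 1.0, 2.5  # assumed sample values
p_i = (gamma - 1.0) * rho_i * u_i  # -> 1.0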
-
class
pysph.sph.wc.crksph.
SummationDensityCRKSPH
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Summation Density CRKSPH
From [CRKSPH2017], equation (76):
\[\rho_{i} = \frac{\sum_j m_{ij} V_j W_{ij}^R} {\sum_{j} V_{j}^{2} W_{ij}^R}\]where,
\[\begin{split}m_{ij} = \begin{cases} m_j, &i \text{ and } j \text{ are the same material} \\ m_i, &i \text{ and } j \text{ are different materials} \end{cases}\end{split}\]Note that in this equation we simply take \(m_{ij} = m_i\), as the mass remains constant throughout the simulation.
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.wc.crksph.
VelocityGradient
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Velocity Gradient
From [CRKSPH2017], equation (74)
\[\partial_{\beta} v_i^{\alpha} = -\sum_j V_j v_{ij}^{\alpha} \partial_{\beta} W_{ij}^R\]Parameters: dim (int) – Dimensionality of the problem.
Predictive-Corrective Incompressible SPH (PCISPH)¶
References
[SolPaj2009] | B. Solenthaler, R. Pajarola “Predictive-Corrective Incompressible SPH”, ACM Trans. Graph 28 (2009), pp. 1–6. |
-
class
pysph.sph.wc.pcisph.
ComputePressure
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
Compute Pressure
Compute pressure iteratively maintaining density within a given tolerance.
\[p_i += \delta \rho^{*}_{{err}_i}\]where,
\[\rho_{err_i} = \rho_i^{*} - \rho_0\]\[\delta = \frac{-1}{\beta (-\sum_j \nabla W_{ij} \cdot \sum_j \nabla W_{ij} - \sum_j \nabla W_{ij} \nabla W_{ij})}\]
-
class
pysph.sph.wc.pcisph.
MomentumEquationPressureGradient
(dest, sources, rho0, tolerance, debug)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation pressure gradient
Standard WCSPH pressure gradient,
\[\frac{d\mathbf{v}_i}{dt} = - \sum_j m_j \left(\frac{p_i}{\rho_i^2} + \frac{p_j}{\rho_j^2}\right) \nabla W(x_{ij}, h)\]
-
class
pysph.sph.wc.pcisph.
MomentumEquationViscosity
(dest, sources, nu=0.0, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation Viscosity
See pysph.sph.wc.viscosity.LaminarViscosity.
-
class
pysph.sph.wc.pcisph.
PCISPHIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
initial_acceleration
(t, dt)[source]¶ Compute the initial accelerations if needed before the iterations start.
The default implementation only does this for the first acceleration evaluator. So if you have multiple evaluators, you must override this method in a subclass.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study how this is done. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
-
-
class
pysph.sph.wc.pcisph.
PCISPHScheme
(fluids, dim, rho0, nu, gx=0.0, gy=0.0, gz=0.0, tolerance=0.1, debug=False, show_itercount=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
-
class
pysph.sph.wc.pcisph.
PCISPHStep
(show_itercount=False)[source]¶
-
class
pysph.sph.wc.pcisph.
Predict
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Predict velocity and position
\[\mathbf{v}^{*}(t+1) = \mathbf{v}(t) + dt \left(\frac{d \mathbf{v}_{visc, g}(t)}{dt} + \frac{d \mathbf{v}_{p} (t)}{dt} \right)\]\[\mathbf{x}^{*}(t+1) = \mathbf{x}(t) + dt \, \mathbf{v}^{*}(t+1)\]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
SPH Boundary Equations¶
-
class
pysph.sph.boundary_equations.
MonaghanBoundaryForce
(dest, sources, deltap)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.boundary_equations.
MonaghanKajtarBoundaryForce
(dest, sources, K=None, beta=None, h=None)[source]¶ Bases:
pysph.sph.equation.Equation
Basic Equations for Solid Mechanics¶
References
[Gray2001] | J. P. Gray et al., “SPH elastic dynamics”, Computer Methods in Applied Mechanics and Engineering, 190 (2001), pp 6641 - 6662. |
-
class
pysph.sph.solid_mech.basic.
ElasticSolidsScheme
(elastic_solids, solids, dim, artificial_stress_eps=0.3, xsph_eps=0.5, alpha=1.0, beta=1.0)[source]¶ Bases:
pysph.sph.scheme.Scheme
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
-
class
pysph.sph.solid_mech.basic.
EnergyEquationWithStress
(dest, sources, alpha=1.0, beta=1.0, eta=0.01)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.solid_mech.basic.
HookesDeviatoricStressRate
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Rate of change of stress
\[\frac{dS^{ij}}{dt} = 2\mu\left(\epsilon^{ij} - \frac{1}{3}\delta^{ij} \epsilon^{ij}\right) + S^{ik}\Omega^{jk} + \Omega^{ik}S^{kj}\]where
\[ \begin{align}\begin{aligned}\begin{split}\epsilon^{ij} = \frac{1}{2}\left(\frac{\partial v^i}{\partial x^j} + \frac{\partial v^j}{\partial x^i}\right)\\\end{split}\\\Omega^{ij} = \frac{1}{2}\left(\frac{\partial v^i}{\partial x^j} - \frac{\partial v^j}{\partial x^i} \right)\end{aligned}\end{align} \]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.solid_mech.basic.
IsothermalEOS
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Compute the pressure using the Isothermal equation of state:
\(p = p_0 + c_0^2(\rho - \rho_0)\)
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.solid_mech.basic.
MomentumEquationWithStress
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation with Artificial Stress
\[\frac{D\vec{v_a}^i}{Dt} = \sum_b m_b\left(\frac{\sigma_a^{ij}}{\rho_a^2} +\frac{\sigma_b^{ij}}{\rho_b^2} + R_{ab}^{ij}f^n \right)\nabla_a W_{ab}\]where
\[ \begin{align}\begin{aligned}\begin{split}f_{ab} = \frac{W(r_{ab})}{W(\Delta p)}\\\end{split}\\R_{ab}^{ij} = R_{a}^{ij} + R_{b}^{ij}\end{aligned}\end{align} \]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.solid_mech.basic.
MonaghanArtificialStress
(dest, sources, eps=0.3)[source]¶ Bases:
pysph.sph.equation.Equation
Artificial stress to remove tensile instability
The dispersion relations in [Gray2001] are used to determine the different components of \(R\).
Angle of rotation for particle \(a\)
\[\tan{2 \theta_a} = \frac{2\sigma_a^{xy}}{\sigma_a^{xx} - \sigma_a^{yy}}\]In rotated frame, the new components of the stress tensor are
\[ \begin{align}\begin{aligned}\begin{split}\bar{\sigma}_a^{xx} = \cos^2{\theta_a} \sigma_a^{xx} + 2\sin{\theta_a} \cos{\theta_a}\sigma_a^{xy} + \sin^2{\theta_a}\sigma_a^{yy}\\\end{split}\\\bar{\sigma}_a^{yy} = \sin^2{\theta_a} \sigma_a^{xx} + 2\sin{\theta_a} \cos{\theta_a}\sigma_a^{xy} + \cos^2{\theta_a}\sigma_a^{yy}\end{aligned}\end{align} \]Components of \(R\) in rotated frame:
\[ \begin{align}\begin{aligned}\begin{split}\bar{R}_{a}^{xx}=\begin{cases}-\epsilon\frac{\bar{\sigma}_{a}^{xx}} {\rho^{2}} & \bar{\sigma}_{a}^{xx}>0\\0 & \bar{\sigma}_{a}^{xx}\leq0 \end{cases}\\\end{split}\\\begin{split}\bar{R}_{a}^{yy}=\begin{cases}-\epsilon\frac{\bar{\sigma}_{a}^{yy}} {\rho^{2}} & \bar{\sigma}_{a}^{yy}>0\\0 & \bar{\sigma}_{a}^{yy}\leq0 \end{cases}\end{split}\end{aligned}\end{align} \]Components of \(R\) in original frame:
\[ \begin{align}\begin{aligned}\begin{split}R_a^{xx} = \cos^2{\theta_a} \bar{R}_a^{xx} + \sin^2{\theta_a} \bar{R}_a^{yy}\\\end{split}\\\begin{split}R_a^{yy} = \sin^2{\theta_a} \bar{R}_a^{xx} + \cos^2{\theta_a} \bar{R}_a^{yy}\\\end{split}\\R_a^{xy} = \sin{\theta_a} \cos{\theta_a}\left(\bar{R}_a^{xx} - \bar{R}_a^{yy}\right)\end{aligned}\end{align} \]Parameters: eps (float) – constant
-
pysph.sph.solid_mech.basic.
get_bulk_mod
(G, nu)[source]¶ Get the bulk modulus from shear modulus and Poisson ratio
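For reference, the standard linear-elasticity relation such a helper would evaluate is \(K = \frac{2G(1+\nu)}{3(1-2\nu)}\); a quick numeric sketch with assumed values:

G, nu = 8.0e9, 0.3  # shear modulus [Pa] and Poisson ratio (assumed values)
K = 2.0 * G * (1.0 + nu) / (3.0 * (1.0 - 2.0 * nu))  # ~1.73e10 Pa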
-
pysph.sph.solid_mech.basic.
get_particle_array_elastic_dynamics
(constants=None, **props)[source]¶ Return a particle array for the Standard SPH formulation of solids.
Parameters: constants (dict) – Dictionary of constants.
Other Parameters: props (dict) – Additional keywords passed are set as the property arrays.
See also: get_particle_array()
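A hedged sketch of building such an array; the geometry is illustrative and the entries of the constants dict are hypothetical placeholders, not a documented list:

import numpy as np
from pysph.sph.solid_mech.basic import get_particle_array_elastic_dynamics

x, y = np.mgrid[0:1:0.01, 0:0.2:0.01]
pa = get_particle_array_elastic_dynamics(
    name='plate', x=x.ravel(), y=y.ravel(), m=0.01, h=0.013, rho=1000.0,
    constants=dict(E=1e7, nu=0.25))  # constant names here are assumptions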
Equations for the High Velocity Impact Problems¶
-
class
pysph.sph.solid_mech.hvi.
MieGruneisenEOS
(dest, sources, gamma, r0, c0, S)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.solid_mech.hvi.
StiffenedGasEOS
(dest, sources, gamma, r0, c0)[source]¶ Bases:
pysph.sph.equation.Equation
Stiffened-gas EOS from “A Free Lagrange Augmented Godunov Method for the Simulation of Elastic-Plastic Solids”, B. P. Howell and G. J. Ball, JCP (2002). http://dx.doi.org/10.1006/jcph.2001.6931
-
class
pysph.sph.solid_mech.hvi.
VonMisesPlasticity2D
(dest, sources, flow_stress)[source]¶ Bases:
pysph.sph.equation.Equation
Gas Dynamics¶
Basic equations for Gas-dynamics
-
class
pysph.sph.gas_dynamics.basic.
ADKEAccelerations
(dest, sources, alpha, beta, g1, g2, k, eps)[source]¶ Bases:
pysph.sph.equation.Equation
ADKE as discussed in the reference [KP14].
References
[KP14] (1, 2, 3) Puri, K. and Ramachandran, P., “A comparison of SPH schemes for the compressible Euler equations”, 2014, Journal of Computational Physics, 256, pp. 308–333 (http://dx.doi.org/10.1016/j.jcp.2013.08.060)
-
class
pysph.sph.gas_dynamics.basic.
ADKEUpdateGhostProps
(dest, sources=None, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.gas_dynamics.basic.
IdealGasEOS
(dest, sources, gamma)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.gas_dynamics.basic.
MPMAccelerations
(dest, sources, beta=2.0, update_alpha1=False, update_alpha2=False, alpha1_min=0.1, alpha2_min=0.1, sigma=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.gas_dynamics.basic.
MPMUpdateGhostProps
(dest, sources=None, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.gas_dynamics.basic.
Monaghan92Accelerations
(dest, sources, alpha=1.0, beta=2.0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.gas_dynamics.basic.
ScaleSmoothingLength
(dest, sources, factor=2.0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.gas_dynamics.basic.
SummationDensity
(dest, sources, dim, density_iterations=False, iterate_only_once=False, k=1.2, htol=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Summation density with iterative solution of the smoothing lengths.
Parameters: - density_iterations (bint) – Flag to indicate density iterations are required.
- iterate_only_once (bint) – Flag to indicate if only one iteration is required.
- k (double) – Kernel scaling factor.
- htol (double) – Iteration tolerance.
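For example, a sketch of instantiating this equation with iterative smoothing-length adaptation (array name assumed):

from pysph.sph.gas_dynamics.basic import SummationDensity

eq = SummationDensity(dest='fluid', sources=['fluid'], dim=1,
                      density_iterations=True, k=1.2, htol=1e-6)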
-
class
pysph.sph.gas_dynamics.basic.
SummationDensityADKE
(dest, sources, k=1.0, eps=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.gas_dynamics.basic.
UpdateSmoothingLengthFromVolume
(dest, sources, dim, k=1.2)[source]¶ Bases:
pysph.sph.equation.Equation
Boundary equations for Gas-dynamics
-
class
pysph.sph.gas_dynamics.boundary_equations.
WallBoundary
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
initialize
(d_idx, d_p, d_rho, d_e, d_m, d_cs, d_div, d_h, d_htmp, d_h0, d_u, d_v, d_w, d_wij)[source]¶
Surface tension¶
Implementation of the equations used for surface tension modelling, for example in KHI simulations. The references are [SY11], [JM00], [A10], [XA06].
References
[SY11] | M. Shadloo, M. Yildiz, “Numerical modelling of Kelvin-Helmholtz instability using smoothed particle hydrodynamics”, IJNME, 2011, 87, pp 988–1006 |
[JM00] | Joseph P. Morris “Simulating surface tension with smoothed particle hydrodynamics”, JCP, 2000, 33, pp 333–353 |
[A10] | Adami et al. “A new surface-tension formulation for multi-phase SPH using a reproducing divergence approximation”, JCP 2010, 229, pp 5011–5021 |
[XA06] | X.Y. Hu, N.A. Adams. “A multi-phase SPH method for macroscopic and mesoscopic flows”, JCP 2006, 213, pp 844–861 |
-
class
pysph.sph.surface_tension.
AdamiColorGradient
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of color Eq. (14) in [A10]
\[\nabla c_a = \frac{1}{V_a}\sum_b \left[V_a^2 + V_b^2 \right]\tilde{c}_{ab}\nabla_a W_{ab}\,,\]where, the average \(\tilde{c}_{ab}\) is defined as
\[\tilde{c}_{ab} = \frac{\rho_b}{\rho_a + \rho_b}c_a + \frac{\rho_a}{\rho_a + \rho_b}c_b\]Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.surface_tension.
AdamiReproducingDivergence
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Reproducing divergence approximation Eq. (20) in [A10] to compute the curvature
\[\nabla \cdot \boldsymbol{\phi}_a = d\frac{\sum_b \boldsymbol{\phi}_{ab}\cdot \nabla_a W_{ab}V_b}{\sum_b\boldsymbol{x}_{ab}\cdot \nabla_a W_{ab} V_b}\]
-
class
pysph.sph.surface_tension.
CSFSurfaceTensionForce
(dest, sources, sigma=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Acceleration due to surface tension force Eq. (25) in [JM00]:
Note that as per Eq. (17) in [JM00], the un-normalized normal is basically the gradient of the color function. The acceleration term therefore depends on the gradient of the color field.
-
class
pysph.sph.surface_tension.
CSFSurfaceTensionForceAdami
(dest, sources, sigma)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.surface_tension.
ColorGradientAdami
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.surface_tension.
ColorGradientUsingNumberDensity
(dest, sources, epsilon=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of the color function using Eq. (13) of [SY11]:
\[\nabla C_a = \sum_b \frac{2\left(C_b - C_a\right)}{\psi_a + \psi_b} \nabla_{a} W_{ab}\]Using the gradient of the color function, the normal and the discretized dirac delta are calculated in the post loop.
Singularities are avoided as per the recommendation by [JM00] (see eqs 20 & 21) using the parameter \(\epsilon\)
-
class
pysph.sph.surface_tension.
ConstructStressMatrix
(dest, sources, sigma, d=2)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.surface_tension.
InterfaceCurvatureFromDensity
(dest, sources, with_morris_correction=True)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.surface_tension.
InterfaceCurvatureFromNumberDensity
(dest, sources, with_morris_correction=True)[source]¶ Bases:
pysph.sph.equation.Equation
Interface curvature using number density. Eq. (15) in [SY11]:
\[\kappa_a = \sum_b \frac{2.0}{\psi_a + \psi_b} \left(\boldsymbol{n_a} - \boldsymbol{n_b}\right) \cdot \nabla_a W_{ab}\]
-
class
pysph.sph.surface_tension.
MomentumEquationPressureGradientAdami
(dest, sources, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.surface_tension.
MomentumEquationPressureGradientHuAdams
(dest, sources, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.surface_tension.
MomentumEquationPressureGradientMorris
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.surface_tension.
MomentumEquationViscosityAdami
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.surface_tension.
MomentumEquationViscosityMorris
(dest, sources, eta=0.01)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.surface_tension.
MorrisColorGradient
(dest, sources, epsilon=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of the color function using Eq. (17) of [JM00]:
\[\nabla c_a = \sum_b \frac{m_b}{\rho_b}(c_b - c_a) \nabla_{a} W_{ab}\,,\]where a smoothed representation of the color is used in the equation. Using the gradient of the color function, the normal and the discretized dirac delta are calculated in the post loop.
Singularities are avoided as per the recommendation by [JM00] (see eqs 20 & 21) using the parameter \(\epsilon\)
-
class
pysph.sph.surface_tension.
SY11ColorGradient
(dest, sources, epsilon=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of the color function using Eq. (13) of [SY11]:
\[\nabla C_a = \sum_b \frac{2\left(C_b - C_a\right)}{\psi_a + \psi_b} \nabla_{a} W_{ab}\]Using the gradient of the color function, the normal and the discretized dirac delta are calculated in the post loop.
-
class
pysph.sph.surface_tension.
SY11DiracDelta
(dest, sources, epsilon=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Discretized dirac-delta for the SY11 formulation Eq. (14) in [SY11]
This is essentially the same as computing the color gradient, the only difference being that this might be called with a reduced smoothing length.
Note that the normals should be computed using the SY11ColorGradient equation. This function will effectively overwrite the color gradient.
-
class
pysph.sph.surface_tension.
ShadlooViscosity
(dest, sources, alpha)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.surface_tension.
ShadlooYildizSurfaceTensionForce
(dest, sources, sigma=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Acceleration due to surface tension force Eq. (7,9) in [SY11]:
where, \(\delta^s\) is the discretized dirac delta function, \(\boldsymbol{n}\) is the interface normal, \(\kappa\) is the discretized interface curvature and \(\sigma\) is the surface tension force constant.
-
class
pysph.sph.surface_tension.
SmoothedColor
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Smoothed color function. Eq. (17) in [JM00]
\[c_a = \sum_b \frac{m_b}{\rho_b} c_b^i W_{ab}\,,\]where, \(c_b^i\) is the color index associated with a particle.
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.surface_tension.
SolidWallPressureBCnoDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.surface_tension.
SummationDensitySourceMass
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.surface_tension.
SurfaceForceAdami
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
pysph.sph.surface_tension.
get_surface_tension_equations
(fluids, solids, scheme, rho0, p0, c0, b, factor1, factor2, nu, sigma, d, epsilon, gamma, real=False)[source]¶ This function returns the equations required for the multiphase surface-tension formulation, given the names of the fluid and solid particle arrays, the scheme to be used and the other physical parameters.
Parameters: - fluids (list) – List of names of fluid particle arrays
- solids (list) – List of names of solid particle arrays
- scheme (string) –
The scheme with which the equations are to be setup. Supported Schemes:
- TVF scheme with Morris’ surface tension. String to be used: “tvf”
- Adami’s surface tension implementation which doesn’t involve calculation of curvature. String to be used: “adami_stress”
- Adami’s surface tension implementation which involves calculation of curvature. String to be used: “adami”
- Shadloo Yildiz surface tension formulation. String to be used: “shadloo”
- Morris’ surface tension formulation. This is the default scheme which will be used if none of the above strings are input as scheme.
- rho0 (float) – The reference density of the medium (currently multiple reference densities for different particles are not supported)
- p0 (float) – The background pressure of the medium (currently multiple background pressures for different particles are not supported)
- c0 (float) – The speed of sound of the medium (currently multiple speeds of sound for different particles are not supported)
- b (float) – The b parameter of the generalized Tait Equation of State. Refer to the Tait Equation’s documentation for reference
- factor1 (float) – The factor for scaling of smoothing length for calculation of interface curvature number for shadloo’s scheme
- factor2 (float) – The factor for scaling back of smoothing length for calculation of forces after calculating the interface curvature number in shadloo’s scheme
- nu (float) – The kinematic viscosity of the medium
- sigma (float) – The surface tension of the system
- d (int) – The number of dimensions of the problem in the cartesian space
- epsilon (float) – Parameter used to avoid singularities when computing the color gradient and the discretized dirac delta (see the color-gradient equations above)
- real (bool) – Set this to False if the equations are to be evaluated for the ghost particles as well, else keep it True
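For example, a sketch of a call with the “adami” scheme; every value below is a placeholder chosen purely for illustration:

from pysph.sph.surface_tension import get_surface_tension_equations

equations = get_surface_tension_equations(
    fluids=['fluid'], solids=[], scheme='adami', rho0=1.0, p0=1.0, c0=10.0,
    b=1.0, factor1=0.8, factor2=1.25, nu=0.01, sigma=0.07, d=2,
    epsilon=1e-6, gamma=7.0)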
Implicit Incompressible SPH¶
The basic equations for the IISPH formulation of
M. Ihmsen, J. Cornelis, B. Solenthaler, C. Horvath, M. Teschner, “Implicit Incompressible SPH,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 3, pp. 426-435, March 2014. http://dx.doi.org/10.1109/TVCG.2013.105
-
class
pysph.sph.iisph.
AdvectionAcceleration
(dest, sources, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.iisph.
ComputeAII
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.iisph.
ComputeAIIBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
This is important and not really discussed in the original IISPH paper.
-
class
pysph.sph.iisph.
ComputeDII
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.iisph.
ComputeDIIBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.iisph.
ComputeDIJPJ
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.iisph.
ComputeRhoAdvection
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.iisph.
ComputeRhoBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.iisph.
IISPHScheme
(fluids, solids, dim, rho0, nu=0.0, gx=0.0, gy=0.0, gz=0.0, omega=0.5, tolerance=0.01, debug=False, has_ghosts=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
The IISPH scheme
Parameters: - fluids (list(str)) – List of names of fluid particle arrays.
- solids (list(str)) – List of names of solid particle arrays.
- dim (int) – Dimensionality of the problem.
- rho0 (float) – Density of fluid.
- nu (float) – Kinematic viscosity.
- gx, gy, gz (float) – Components of body acceleration (gravity, external forcing, etc.)
- omega (float) – Relaxation parameter for relaxed-Jacobi iterations.
- tolerance (float) – Tolerance for the convergence of pressure iterations as a fraction.
- debug (bool) – Produce some debugging output on iterations.
- has_ghosts (bool) – The problem has ghost particles so add equations for those.
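A minimal sketch of constructing the scheme (array names and values are assumptions):

from pysph.sph.iisph import IISPHScheme

scheme = IISPHScheme(fluids=['fluid'], solids=['boundary'], dim=2,
                     rho0=1000.0, nu=1e-6, gy=-9.81, tolerance=0.01)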
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
This is to be called before get_solver is called.
Parameters: - dim (int) – Number of dimensions.
- kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
class
pysph.sph.iisph.
IISPHStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
A straightforward and simple integrator to be used for IISPH.
-
class
pysph.sph.iisph.
NormalizedSummationDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.iisph.
NumberDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.iisph.
PressureForce
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.iisph.
PressureForceBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.iisph.
PressureSolve
(dest, sources, rho0, omega=0.5, tolerance=0.01, debug=False)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.iisph.
PressureSolveBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.iisph.
SummationDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.iisph.
SummationDensityBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.iisph.
UpdateGhostPressure
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.iisph.
UpdateGhostProps
(dest, sources=None)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.iisph.
ViscosityAcceleration
(dest, sources, nu)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.iisph.
ViscosityAccelerationBoundary
(dest, sources, rho0, nu)[source]¶ Bases:
pysph.sph.equation.Equation
The acceleration on the fluid due to a boundary.
Hopkins’ ‘Traditional’ SPH (TSPH)¶
References
[Hopkins2013] | (1, 2) Hopkins, Philip F. “A General Class of Lagrangian Smoothed Particle Hydrodynamics Methods and Implications for Fluid Mixing Problems.” Monthly Notices of the Royal Astronomical Society 428, no. 4 (February 1, 2013): 2840–56. https://doi.org/10.1093/mnras/sts210. |
[Hopkins2015] | (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) Hopkins, Philip F. “A New Class of Accurate, Mesh-Free Hydrodynamic Simulation Methods.” Monthly Notices of the Royal Astronomical Society 450, no. 1 (June 11, 2015): 53–110. https://doi.org/10.1093/mnras/stv195. |
-
class
pysph.sph.gas_dynamics.tsph.
TSPHScheme
(fluids, solids, dim, gamma, hfact, beta=2.0, fkern=1.0, max_density_iterations=250, alphamax=1.0, density_iteration_tolerance=0.001, has_ghosts=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
Density-energy formulation [Hopkins2013] including Balsara’s artificial viscosity switch with modifications, as presented in Appendix F1 of [Hopkins2015].
Notes
Is this scheme exactly in accordance with what is proposed in [Hopkins2015]? Not quite; it differs in the following ways:
- The smoothing length is adapted using the MPM [KP14] procedure from SummationDensity. In this, the calculation of the grad-h terms is changed to that specified for this scheme.
- The PEC integrator step is used. There is no individual adaptive time-stepping.
- The Gaussian Kernel is used by default instead of the Cubic Spline with radius scale 1.
Tip: Reduce the number of points if particle penetration is encountered. This has to be done while running gas_dynamics.wc_blastwave and gas_dynamics.robert.
Parameters: - fluids (list) – List of names of fluid particle arrays.
- solids (list) – List of names of solid particle arrays (or boundaries), currently not supported
- dim (int) – Dimensionality of the problem.
- gamma (float) – \(\gamma\) for Equation of state.
- hfact (float) – \(h_{fact}\) for smoothing length adaptivity, also referred to as kernel_factor in other gas dynamics schemes.
- beta (float, optional) – \(\beta\) for artificial viscosity, by default 2.0
- fkern (float, optional) – \(f_{kern}\), Factor to scale smoothing length for equivalence with classic kernel when using kernel with altered radius_scale is being used, by default 1.
- max_density_iterations (int, optional) – Maximum number of iterations to run for one density step, by default 250.
- density_iteration_tolerance (float, optional) – Maximum difference allowed in two successive density iterations, by default 1e-3
- has_ghosts (bool, optional) – if ghost particles (either mirror or periodic) is used, by default False
- alphamax (float, optional) – \(\alpha_{av}\) for artificial viscosity switch, by default 1.0
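A minimal sketch of constructing the scheme for a 1D shock-tube style problem; the values are assumptions:

from pysph.sph.gas_dynamics.tsph import TSPHScheme

scheme = TSPHScheme(fluids=['fluid'], solids=[], dim=1, gamma=1.4,
                    hfact=1.2)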
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
class
pysph.sph.gas_dynamics.tsph.
SummationDensity
(dest, sources, dim, density_iterations=False, iterate_only_once=False, hfact=1.2, htol=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
SummationDensity
modified to use number density for calculation of grad-h terms.Ref. Appendix F1 [Hopkins2015]
-
initialize
(d_idx, d_rho, d_arho, d_drhosumdh, d_n, d_dndh, d_prevn, d_prevdndh, d_prevdrhosumdh, d_an)[source]¶
-
-
class
pysph.sph.gas_dynamics.tsph.
IdealGasEOS
(dest, sources, gamma)[source]¶ Bases:
pysph.sph.equation.Equation
IdealGasEOS
modified to avoid repeated calculations usingloop()
. Doing the same usingpost_loop()
.
-
class
pysph.sph.gas_dynamics.tsph.
VelocityGradDivC1
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
First Order consistent velocity gradient and divergence
-
class
pysph.sph.gas_dynamics.tsph.
BalsaraSwitch
(dest, sources, alphaav, fkern)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.gas_dynamics.tsph.
MomentumAndEnergy
(dest, sources, dim, fkern, beta=2.0)[source]¶ Bases:
pysph.sph.equation.Equation
TSPH Momentum and Energy Equations with artificial viscosity.
A possible typo in the reference that has been taken care of:
Instead of Equation F3 [Hopkins2015] for evolution of total energy sans artificial viscosity and artificial conductivity,
\[\frac{\mathrm{d} E_{i}}{\mathrm{~d} t}=\boldsymbol{v}_{i} \cdot \frac{\mathrm{d} \boldsymbol{P}_{i}}{\mathrm{~d} t}- \sum_{j} m_{i} m_{j}\left(\boldsymbol{v}_{i}- \boldsymbol{v}_{j}\right) \cdot\left[\frac{P_{i}} {\bar{\rho}_{i}^{2}} f_{i, j} \nabla_{i} W_{i j}\left(h_{i}\right)\right],\]it should have been,
\[\frac{\mathrm{d} E_{i}}{\mathrm{~d} t}=\boldsymbol{v}_{i} \cdot \frac{\mathrm{d} \boldsymbol{P}_{i}}{\mathrm{~d} t}+ \sum_{j} m_{i} m_{j}\left(\boldsymbol{v}_{i}- \boldsymbol{v}_{j}\right) \cdot\left[\frac{P_{i}} {\bar{\rho}_{i}^{2}} f_{i, j} \nabla_{i} W_{i j}\left(h_{i}\right)\right].\]Specific thermal energy, \(u\), would therefore be evolved using,
\[\frac{\mathrm{d} u_{i}}{\mathrm{~d} t}= \sum_{j} m_{j}\left(\boldsymbol{v}_{i}- \boldsymbol{v}_{j}\right) \cdot\left[\frac{P_{i}} {\bar{\rho}_{i}^{2}} f_{i, j} \nabla_{i} W_{i j}\left(h_{i}\right)\right]\]
-
class
pysph.sph.gas_dynamics.tsph.
WallBoundary
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
WallBoundary
modified for TSPH. Most importantly, the mass of a boundary particle should never be zero, since it appears in the denominator of \(f_{ij}\); this has been addressed.
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
initialize
(d_idx, d_p, d_rho, d_e, d_m, d_cs, d_h, d_htmp, d_h0, d_u, d_v, d_w, d_wij, d_n, d_dndh, d_drhosumdh, d_divv, d_m0)[source]¶
-
class
pysph.sph.gas_dynamics.tsph.
UpdateGhostProps
(dest, sources=None, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation
MPMUpdateGhostProps
modified for TSPH
Hopkins’ ‘Modern’ SPH (PSPH)¶
References
[CullenDehnen2010] | (1, 2) Cullen, Lee, and Walter Dehnen. “Inviscid Smoothed Particle Hydrodynamics: Inviscid Smoothed Particle Hydrodynamics.” Monthly Notices of the Royal Astronomical Society 408, no. 2 (October 21, 2010): 669–83. https://doi.org/10.1111/j.1365-2966.2010.17158.x. |
[ReadHayfield2012] | Read, J. I., and T. Hayfield. “SPHS: Smoothed Particle Hydrodynamics with a Higher Order Dissipation Switch: SPH with a Higher Order Dissipation Switch.” Monthly Notices of the Royal Astronomical Society 422, no. 4 (June 1, 2012): 3037–55. https://doi.org/10.1111/j.1365-2966.2012.20819.x. |
-
class
pysph.sph.gas_dynamics.psph.
PSPHScheme
(fluids, solids, dim, gamma, hfact, betab=2.0, fkern=1.0, max_density_iterations=250, alphac=0.25, density_iteration_tolerance=0.001, has_ghosts=False, alphamin=0.02, alphamax=2.0, betac=0.7, betad=0.05, betaxi=1.0)[source]¶ Bases:
pysph.sph.scheme.Scheme
Pressure-energy formulation [Hopkins2013] including the Cullen-Dehnen artificial viscosity switch [CullenDehnen2010] with modifications, as presented in Appendix F2 of [Hopkins2015].
Notes
Is this scheme exactly in accordance with what is proposed in [Hopkins2015]? Not quite; it differs in the following ways:
- The smoothing length is adapted using the MPM [KP14] procedure from SummationDensity. In this, the calculation of the grad-h terms is changed to that specified for this scheme.
- The PEC integrator step is used. There is no individual adaptive time-stepping.
- The Gaussian Kernel is used by default instead of the Cubic Spline with radius scale 1.
Tip: Reduce the number of points if particle penetration is encountered. This has to be done while running gas_dynamics.wc_blastwave and gas_dynamics.robert.
Parameters: - fluids (list) – List of names of fluid particle arrays.
- solids (list) – List of names of solid particle arrays (or boundaries), currently not supported
- dim (int) – Dimensionality of the problem.
- gamma (float) – \(\gamma\) for Equation of state.
- hfact (float) – \(h_{fact}\) for smoothing length adaptivity, also referred to as kernel_factor in other gas dynamics schemes.
- betab (float, optional) – \(\beta_b\) for artificial viscosity, by default 2.0
- fkern (float, optional) – \(f_{kern}\), Factor to scale smoothing length for equivalence with classic kernel when using kernel with altered radius_scale is being used, by default 1.
- max_density_iterations (int, optional) – Maximum number of iterations to run for one density step, by default 250.
- alphac (float, optional) – \(\alpha_c\) for artificial conductivity, by default 0.25
- density_iteration_tolerance (float, optional) – Maximum difference allowed in two successive density iterations, by default 1e-3
- has_ghosts (bool, optional) – if ghost particles (either mirror or periodic) is used, by default False
- alphamin (float, optional) – \(\alpha_{min}\) for artificial viscosity switch, by default 0.02
- alphamax (float, optional) – \(\alpha_{max}\) for artificial viscosity switch, by default 2.0
- betac (float, optional) – \(\beta_c\) for artificial viscosity switch, by default 0.7
- betad (float, optional) – \(\beta_d\) for artificial viscosity switch, by default 0.05
- betaxi (float, optional) – \(\beta_{\xi}\) for artificial viscosity switch, by default 1.0
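Analogously to the TSPH scheme above, a minimal sketch with assumed values:

from pysph.sph.gas_dynamics.psph import PSPHScheme

scheme = PSPHScheme(fluids=['fluid'], solids=[], dim=1, gamma=1.4,
                    hfact=1.2, alphamax=2.0)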
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
class
pysph.sph.gas_dynamics.psph.
PSPHSummationDensityAndPressure
(dest, sources, dim, gamma, density_iterations=False, iterate_only_once=False, hfact=1.2, htol=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
SummationDensity
modified to use number density for calculation of grad-h terms and to calculate pressure and speed of sound as well.Ref. Appendix F2 [Hopkins2015]
Parameters: - density_iterations (bint, optional) – Flag to indicate density iterations are required, by default False
- iterate_only_once (bint, optional) – Flag to indicate if only one iteration is required, by default False
- hfact (float, optional) – \(h_{fact}\), by default 1.2
- htol (double, optional) – Iteration tolerance, by default 1e-6
-
initialize
(d_idx, d_rho, d_arho, d_n, d_dndh, d_prevn, d_prevdndh, d_p, d_dpsumdh, d_dprevpsumdh, d_an)[source]¶
-
class
pysph.sph.gas_dynamics.psph.
GradientKinsfolkC1
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
First order consistent,
- Velocity gradient, grad(v)
- Acceleration gradient, grad(a)
- Velocity divergence, div(v)
- Velocity divergence rate, d (div(v)) / dt
- Traceless symmetric strain rate, S
- trace(dot(S,transpose(S)))
Ref. Appendix B [CullenDehnen2010]
-
class
pysph.sph.gas_dynamics.psph.
LimiterAndAlphas
(dest, sources, alphamin=0.02, alphamax=2.0, betac=0.7, betad=0.05, betaxi=1.0, fkern=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Cullen Dehnen’s limiter for artificial viscosity modified by Hopkins.
Ref. Appendix F2 [Hopkins2015]
-
class
pysph.sph.gas_dynamics.psph.
MomentumAndEnergy
(dest, sources, dim, fkern, gamma, betab=2.0, alphac=0.25)[source]¶ Bases:
pysph.sph.equation.Equation
PSPH Momentum and Energy Equations with artificial viscosity and artificial conductivity.
Possible typos in the reference that have been taken care of:
Instead of Equation F15 [Hopkins2015] for evolution of total energy sans artificial viscosity and artificial conductivity,
\[\frac{\mathrm{d} E_{i}}{\mathrm{~d} t}= \boldsymbol{v}_{i} \cdot \frac{\mathrm{d} \boldsymbol{P}_{i}}{\mathrm{~d} t}- \sum_{j=1}^{N}(\gamma-1)^{2} m_{i} m_{j} u_{i} u_{j} \frac{f_{i j}}{\bar{P}_{i}}\left(\boldsymbol{v}_{i}- \boldsymbol{v}_{j}\right) \cdot \nabla_{i} W_{i j} \left(h_{i}\right),\]it should have been,
\[\frac{\mathrm{d} E_{i}}{\mathrm{~d} t}= \boldsymbol{v}_{i} \cdot \frac{\mathrm{d} \boldsymbol{P}_{i}}{\mathrm{~d} t}+ \sum_{j=1}^{N}(\gamma-1)^{2} m_{i} m_{j} u_{i} u_{j} \frac{f_{i j}}{\bar{P}_{i}}\left(\boldsymbol{v}_{i}- \boldsymbol{v}_{j}\right) \cdot \nabla_{i} W_{i j} \left(h_{i}\right).\]Specific thermal energy, \(u\), would therefore be evolved using,
\[\frac{\mathrm{d} u_{i}}{\mathrm{~d} t}= \sum_{j=1}^{N}(\gamma-1)^{2} m_{j} u_{i} u_{j} \frac{f_{i j}}{\bar{P}_{i}}\left(\boldsymbol{v}_{i}- \boldsymbol{v}_{j}\right) \cdot \nabla_{i} W_{i j} \left(h_{i}\right).\]Equation F18 [Hopkins2015] for contribution of artificial viscosity to the evolution of total energy is,
\[\frac{\mathrm{d} E_{i}}{\mathrm{~d} t}= \alpha_{\mathrm{C}} \sum_{j} m_{i} m_{j} \alpha_{i j} \tilde{v}_{s}\left(u_{i}- u_{j}\right) \times \frac{\left|P_{i}-P_{j}\right|}{P_{i}+ P_{j}} \frac{\nabla_{i} W_{i j}\left(h_{i}\right)+ \nabla_{i} W_{i j}\left(h_{j}\right)}{\bar{\rho}_{i}+ \bar{\rho}_{j}} .\]Carefully comparing with [ReadHayfield2012] and [KP14], specific thermal energy, \(u\), should be evolved using,
\[\begin{split}\frac{\mathrm{d} u_{i}}{\mathrm{~d} t}= \alpha_{\mathrm{C}} \sum_{j} & m_{j} \alpha_{i j} \tilde{v}_{s}\left(u_{i}- u_{j}\right) \frac{\left|P_{i}-P_{j}\right|}{P_{i}+ P_{j}} \\ & \frac{\nabla_{i} W_{i j}\left(h_{i}\right)+ \nabla_{i} W_{i j}\left(h_{j}\right)}{\bar{\rho}_{i}+ \bar{\rho}_{j}} \cdot \frac{\left(\boldsymbol{x}_{i}- \boldsymbol{x}_{j}\right)}{\left|\boldsymbol{x}_{i}- \boldsymbol{x}_{j}\right|}\end{split}\]
-
class
pysph.sph.gas_dynamics.psph.
WallBoundary
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
WallBoundary
modified for PSPHParameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
initialize
(d_idx, d_p, d_rho, d_e, d_m, d_cs, d_h, d_htmp, d_h0, d_u, d_v, d_w, d_wij, d_n, d_dndh, d_dpsumdh, d_m0)[source]¶
-
class
pysph.sph.gas_dynamics.psph.
UpdateGhostProps
(dest, sources=None, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation
MPMUpdateGhostProps
modified for PSPH
MAGMA2¶
References
[Rosswog2009] | Rosswog, Stephan. “Astrophysical smooth particle hydrodynamics.” New Astronomy Reviews 53, no. 4-6 (2009): 78-104. https://doi.org/10.1016/j.newar.2009.08.007 |
[Rosswog2015] | Rosswog, Stephan. “Boosting the accuracy of SPH techniques: Newtonian and special-relativistic tests.” Monthly Notices of the Royal Astronomical Society 448, no. 4 (2015): 3628-3664. https://doi.org/10.1093/mnras/stv225. |
[Rosswog2020a] | Rosswog, Stephan. “A simple, entropy-based dissipation trigger for SPH.” The Astrophysical Journal 898, no. 1 (2020): 60. https://doi.org/10.3847/1538-4357/ab9a2e. |
[Rosswog2020b] | (1, 2, 3) Rosswog, Stephan. “The Lagrangian hydrodynamics code MAGMA2.” Monthly Notices of the Royal Astronomical Society 498, no. 3 (2020): 4230-4255. https://doi.org/10.1093/mnras/staa2591. |
-
class
pysph.sph.gas_dynamics.magma2.
IncreaseSmoothingLength
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Increase smoothing length by 10%.
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.gas_dynamics.magma2.
UpdateSmoothingLength
(dest, sources, ndes)[source]¶ Bases:
pysph.sph.equation.Equation
Sorts the neighbours by distance and uses the distance of the nearest \((n_{des} + 1)^{th}\) particle to set the smoothing length, where \(n_{des}\) is the desired number of neighbours in the kernel support of each particle.
-
class
pysph.sph.gas_dynamics.magma2.
SummationDensityMPMStyle
(dest, sources, dim, density_iterations=False, iterate_only_once=False, hfact=1.2, htol=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
SummationDensity
modified to use number density and without grad-h terms.
-
class
pysph.sph.gas_dynamics.magma2.
IdealGasEOS
(dest, sources, gamma)[source]¶ Bases:
pysph.sph.equation.Equation
IdealGasEOS
modified to avoid repeated calculations usingloop()
. Doing the same usingpost_loop()
.
-
class
pysph.sph.gas_dynamics.magma2.
AuxiliaryGradient
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Auxiliary first gradient calculated using the analytical gradient of the kernel and without using the density.
-
class
pysph.sph.gas_dynamics.magma2.
CorrectionMatrix
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Correction matrix, \(C\), that accounts for the local particle distribution and is used in the calculation of gradients without using analytical derivatives of the kernel.
-
class
pysph.sph.gas_dynamics.magma2.
FirstGradient
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
First gradient and divergence calculated using matrix inversion formulation without analytical derivative of the kernel.
-
class
pysph.sph.gas_dynamics.magma2.
SecondGradient
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Second gradient calculated from auxiliary gradient using matrix inversion formulation without analytical derivative of the kernel.
-
class
pysph.sph.gas_dynamics.magma2.
EntropyBasedDissipationTrigger
(dest, sources, alphamax, alphamin, fkern, l0, l1, gamma)[source]¶ Bases:
pysph.sph.equation.Equation
Simple, entropy-based dissipation trigger from [Rosswog2020a]
-
class
pysph.sph.gas_dynamics.magma2.
WallBoundary
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
WallBoundary
modified for MAGMA2.
-
initialize
(d_idx, d_p, d_rho, d_e, d_m, d_cs, d_h, d_htmp, d_h0, d_u, d_v, d_w, d_wij, d_n, d_dndh, d_divv, d_alpha, d_ddv, d_dv, d_de, d_cm, d_dde, d_rho0)[source]¶
-
-
class
pysph.sph.gas_dynamics.magma2.
MomentumAndEnergyStdGrad
(dest, sources, dim, fkern, eta_crit=0.3, eta_fold=0.2, beta=2.0, alphac=0.05, eps=0.01)[source]¶ Bases:
pysph.sph.gas_dynamics.magma2.MomentumAndEnergy
Standard Gradient formulation (stdGrad) momentum and energy equations with artificial viscosity and artificial conductivity from [Rosswog2020b]
-
class
pysph.sph.gas_dynamics.magma2.
MomentumAndEnergyMI2
(dest, sources, dim, fkern, eta_crit=0.3, eta_fold=0.2, beta=2.0, alphac=0.05, eps=0.01)[source]¶ Bases:
pysph.sph.gas_dynamics.magma2.MomentumAndEnergy
Matrix inversion formulation 2 (MI2) momentum and energy equations with artificial viscosity and artificial conductivity from [Rosswog2020b]
-
class
pysph.sph.gas_dynamics.magma2.
EvaluateTildeMu
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Find \(\tilde{\mu}\) to calculate time step.
-
class
pysph.sph.gas_dynamics.magma2.
SettleByArtificialPressure
(dest, sources, xi=0.5, fkern=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Equation 40 of [Rosswog2020b]. Combined with an equation to update density (and smoothing length, if preferred), this equation can be evaluated through
SPHEvaluator
to settle the particles and obtain an initial distribution.
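As a hedged sketch of such an evaluation (the particle setup below is assumed, and any properties this equation reads or writes beyond the defaults would have to be added to the array first):
import numpy as np
from pysph.base.kernels import CubicSpline
from pysph.base.utils import get_particle_array
from pysph.sph.equation import Group
from pysph.sph.gas_dynamics.magma2 import SettleByArtificialPressure
from pysph.tools.sph_evaluator import SPHEvaluator

x = np.linspace(0.0, 1.0, 50)
fluid = get_particle_array(name='fluid', x=x, h=0.04*np.ones_like(x),
                           m=0.02*np.ones_like(x), rho=np.ones_like(x))
equations = [Group(equations=[
    SettleByArtificialPressure(dest='fluid', sources=['fluid'], xi=0.5),
])]
evaluator = SPHEvaluator(arrays=[fluid], equations=equations, dim=1,
                         kernel=CubicSpline(dim=1))
evaluator.evaluate()  # one pass; repeat with density updates to settle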
Rigid body motion¶
Rigid body related equations.
-
class
pysph.sph.rigid_body.
AkinciRigidFluidCoupling
(dest, sources, fluid_rho=1000)[source]¶ Bases:
pysph.sph.equation.Equation
Force between a solid sphere and an SPH fluid particle. This is implemented using Akinci's [1] force, with an additional force from the solid body's pressure as implemented by Liu [2].
[1]’Versatile Rigid-Fluid Coupling for Incompressible SPH’
URL: https://graphics.ethz.ch/~sobarbar/papers/Sol12/Sol12.pdf
[2]A 3D Simulation of a Moving Solid in Viscous Free-Surface Flows by Coupling SPH and DEM
https://doi.org/10.1155/2017/3174904
- Note: the forces on both phases are added at once.
- Make sure this force is applied only once for both sets of particles; see the sketch below.
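As a hedged sketch (the particle array names are assumed for illustration), the equation is added once, with the fluid as destination and the rigid body as source:
from pysph.sph.equation import Group
from pysph.sph.rigid_body import AkinciRigidFluidCoupling

# Apply the coupling once; the reaction on the body is accumulated by
# the same equation, so no mirrored equation with dest='cube' is added.
coupling = Group(equations=[
    AkinciRigidFluidCoupling(dest='fluid', sources=['cube'],
                             fluid_rho=1000.0),
])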
-
class
pysph.sph.rigid_body.
BodyForce
(dest, sources, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.rigid_body.
EulerStepRigidBody
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Fast but inaccurate integrator. Use this for testing
-
class
pysph.sph.rigid_body.
LiuFluidForce
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Force between a solid sphere and an SPH fluid particle. This is implemented using Akinci's [1] force, with an additional force from the solid body's pressure as implemented by Liu [2].
[1]’Versatile Rigid-Fluid Coupling for Incompressible SPH’
URL: https://graphics.ethz.ch/~sobarbar/papers/Sol12/Sol12.pdf
[2]A 3D Simulation of a Moving Solid in Viscous Free-Surface Flows by Coupling SPH and DEM
https://doi.org/10.1155/2017/3174904
- Note: the forces on both phases are added at once.
- Make sure this force is applied only once for both sets of particles.
-
class
pysph.sph.rigid_body.
NumberDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.rigid_body.
PressureRigidBody
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
The pressure acceleration on the fluid/solid due to a boundary. Implemented from Akinci et al. http://dx.doi.org/10.1145/2185520.2185558
Use this with the fluid as a destination and body as source.
-
class
pysph.sph.rigid_body.
RK2StepRigidBody
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
-
initialize
(d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_omega, d_omega0, d_vc, d_vc0, d_num_body)[source]¶
-
-
class
pysph.sph.rigid_body.
RigidBodyCollision
(dest, sources, kn=1000.0, mu=0.5, en=0.8)[source]¶ Bases:
pysph.sph.equation.Equation
Force between two spheres is implemented using DEM contact force law.
Refer https://doi.org/10.1016/j.powtec.2011.09.019 for more information.
Open-source MFIX-DEM software for gas–solids flows: Part I—Verification studies.
Initialise the required coefficients for force calculation.
Keyword arguments:
- kn – normal spring stiffness (default 1e3)
- mu – friction coefficient (default 0.5)
- en – coefficient of restitution (default 0.8)
Given these coefficients, the tangential spring stiffness and the normal and tangential damping coefficients are computed by default.
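A hedged usage sketch (array names and coefficient values are assumed); sphere–wall contact uses the companion RigidBodyWallCollision documented further below:
from pysph.sph.equation import Group
from pysph.sph.rigid_body import RigidBodyCollision, RigidBodyWallCollision

contact = Group(equations=[
    # sphere-sphere contact within and between rigid bodies
    RigidBodyCollision(dest='body', sources=['body'],
                       kn=1e4, mu=0.5, en=0.8),
    # sphere-wall contact
    RigidBodyWallCollision(dest='body', sources=['wall'],
                           kn=1e4, mu=0.5, en=0.8),
])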
-
class
pysph.sph.rigid_body.
RigidBodyForceGPUGems
(dest, sources, k=1.0, d=1.0, eta=1.0, kt=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
This is inspired by http://http.developer.nvidia.com/GPUGems3/gpugems3_ch29.html and B. K. Mishra's DEM article, “A review of computer simulation of tumbling mills by the discrete element method: Part I - contact mechanics”, http://dx.doi.org/10.1016/S0301-7516(03)00032-2.
Note that d is a factor multiplied with the “h” of the particle.
-
class
pysph.sph.rigid_body.
RigidBodyMoments
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.rigid_body.
RigidBodyMotion
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.rigid_body.
RigidBodyWallCollision
(dest, sources, kn=1000.0, mu=0.5, en=0.8)[source]¶ Bases:
pysph.sph.equation.Equation
Force between sphere and a wall is implemented using DEM contact force law.
Refer https://doi.org/10.1016/j.powtec.2011.09.019 for more information.
Open-source MFIX-DEM software for gas–solids flows: Part I—Verification studies.
Initialise the required coefficients for force calculation.
Keyword arguments:
- kn – normal spring stiffness (default 1e3)
- mu – friction coefficient (default 0.5)
- en – coefficient of restitution (default 0.8)
Given these coefficients, the tangential spring stiffness and the normal and tangential damping coefficients are computed by default.
-
class
pysph.sph.rigid_body.
SummationDensityBoundary
(dest, sources, fluid_rho=1000.0)[source]¶ Bases:
pysph.sph.equation.Equation
Equation to find the density of the fluid particle due to any boundary or a rigid body
\(\rho_a = \sum_b \rho_{\text{fluid}} V_b W_{ab}\)
-
class
pysph.sph.rigid_body.
SummationDensityRigidBody
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
-
class
pysph.sph.rigid_body.
ViscosityRigidBody
(dest, sources, rho0, nu)[source]¶ Bases:
pysph.sph.equation.Equation
The viscous acceleration on the fluid/solid due to a boundary. Implemented from Akinci et al. http://dx.doi.org/10.1145/2185520.2185558
Use this with the fluid as a destination and body as source.
-
pysph.sph.rigid_body.
get_alpha_dot
()[source]¶ Use sympy to perform most of the math and use the resulting formulae to calculate:
\(I^{-1}\left(\tau - \omega \times (I\omega)\right)\)
Miscellaneous¶
Functions for advection¶
-
class
pysph.sph.misc.advection.
Advect
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.misc.advection.
MixingVelocityUpdate
(dest, sources, T)[source]¶ Bases:
pysph.sph.equation.Equation
Functions to reduce array data in serial or parallel.
-
pysph.base.reduce_array.
dummy_reduce_array
(array, op='sum')[source]¶ Simply returns the array for the serial case.
-
pysph.base.reduce_array.
mpi_reduce_array
(array, op='sum')[source]¶ Reduce an array given an array and a suitable reduction operation.
Currently, only ‘sum’, ‘max’, ‘min’ and ‘prod’ are supported.
Parameters
- array: numpy.ndarray: Any numpy array (1D).
- op: str: reduction operation, one of (‘sum’, ‘prod’, ‘min’, ‘max’)
-
pysph.base.reduce_array.
parallel_reduce_array
(array, op='sum')¶ Reduce an array given an array and a suitable reduction operation.
Currently, only ‘sum’, ‘max’, ‘min’ and ‘prod’ are supported.
Parameters
- array: numpy.ndarray: Any numpy array (1D).
- op: str: reduction operation, one of (‘sum’, ‘prod’, ‘min’, ‘max’)
-
pysph.base.reduce_array.
serial_reduce_array
(array, op='sum')[source]¶ Reduce an array given an array and a suitable reduction operation.
Currently, only ‘sum’, ‘max’, ‘min’ and ‘prod’ are supported.
Parameters
- array: numpy.ndarray: Any numpy array (1D).
- op: str: reduction operation, one of (‘sum’, ‘prod’, ‘min’, ‘max’)
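A minimal usage sketch (values assumed); under MPI, parallel_reduce_array also reduces across processors:
import numpy as np
from pysph.base.reduce_array import parallel_reduce_array, serial_reduce_array

a = np.array([1.0, 2.0, 3.0])
local_sum = serial_reduce_array(a, op='sum')     # -> 6.0
global_max = parallel_reduce_array(a, op='max')  # across ranks under MPI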
Group of equations¶
-
class
pysph.sph.equation.
Group
(equations, real=True, update_nnps=False, iterate=False, max_iterations=1, min_iterations=0, pre=None, post=None, condition=None, start_idx=0, stop_idx=None, name=None)[source]¶ Bases:
object
A group of equations.
This class provides some support for the code generation for the collection of equations.
Constructor.
Parameters: - equations (list) – a list of equation objects.
- real (bool) – specifies if only non-remote/non-ghost particles should be operated on.
- update_nnps (bool) – specifies if the neighbors should be re-computed locally after this group is executed.
- iterate (bool) – specifies if the group should continue iterating until each equation’s “converged()” methods returns with a positive value.
- max_iterations (int) – specifies the maximum number of times this group should be iterated.
- min_iterations (int) – specifies the minimum number of times this group should be iterated.
- pre (callable) – A callable taking no arguments, called before anything in the group is executed.
- post (callable) – A callable taking no arguments, called after the group is completed.
- condition (callable) – A callable that is passed (t, dt). If this callable returns True, the group is executed, otherwise it is not. If condition is None, the group is always executed. Note that this should work even if the group has many destination arrays.
- start_idx (int or str) – Start looping from this destination index. Starts from the given number if an integer is passed. If a string is passed, look for a property/constant with that name and use its first value as the start index.
- stop_idx (int or str) – Loop up to this destination index instead of over all possible values. Defaults to all particles. Ends at the given number if an integer is passed. If a string is passed, look for a property/constant and use its first value as the loop count. Note that this works like a range stop parameter so the last value is not included.
- name (str) – The passed string is used to name the Group in the profiling info csv file to make it easy to read. If a string is not passed it defaults to the name ‘Group’.
Notes
When running simulations in parallel, one should typically run the summation density over all particles (both local and remote) in each processor. This is because we must update the pressure/density of the remote neighbors in the current processor. Otherwise the results can be incorrect with the remote particles having an older density. This is also the case for the TaitEOS. In these cases the group that computes the equation should set real to False.
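As a hedged example of the note above (the equation and array names are assumed for illustration), a summation-density group for parallel runs would be built with real=False:
from pysph.sph.basic_equations import SummationDensity
from pysph.sph.equation import Group

# Operate on remote/ghost particles too, so their density stays current.
density = Group(
    equations=[SummationDensity(dest='fluid', sources=['fluid'])],
    real=False,
)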
-
class
pysph.sph.equation.
MultiStageEquations
(groups)[source]¶ Bases:
object
A class that allows a user to specify different equations for different stages.
The object doesn’t do much, except contain the different collections of equations.
Parameters: groups (list/tuple) – A list/tuple of list of groups/equations, one for each stage.
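A minimal sketch (the equations chosen here are assumptions, purely for illustration): one list of groups is supplied per integrator stage:
from pysph.sph.basic_equations import SummationDensity, XSPHCorrection
from pysph.sph.equation import Group, MultiStageEquations

stage1 = [Group(equations=[SummationDensity(dest='fluid', sources=['fluid'])])]
stage2 = [Group(equations=[XSPHCorrection(dest='fluid', sources=['fluid'])])]
equations = MultiStageEquations([stage1, stage2])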
Integrator related modules¶
Basic code for the templated integrators.
Currently we only support two-step integrators.
These classes are used to generate the code for the actual integrators from the sph_eval module.
-
class
pysph.sph.integrator.
EPECIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Predictor corrector integrators can have two modes of operation.
In the Evaluate-Predict-Evaluate-Correct (EPEC) mode, the system is advanced using:
\[ \begin{align}\begin{aligned}F(y^n) --> Evaluate\\y^{n+\frac{1}{2}} = y^n + \frac{\Delta t}{2} F(y^n) --> Predict\\F(y^{n+\frac{1}{2}}) --> Evaluate\\y^{n+1} = y^n + \Delta t F(y^{n+\frac{1}{2}}) --> Correct\end{aligned}\end{align} \]Notes:
The Evaluate stage of the integrator forces a function evaluation. Therefore, the PEC mode is much faster but relies on old accelerations for the Prediction stage.
In the EPEC mode, the final corrector can be modified to:
\[y^{n+1} = y^n + \frac{\Delta t}{2}\left( F(y^n) + F(y^{n+\frac{1}{2}}) \right)\]This would require additional storage for the accelerations.
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
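A hedged sketch of a custom one_timestep, using only the methods listed above (this mirrors a two-stage predict-evaluate-correct scheme and is not the packaged implementation):
from pysph.sph.integrator import Integrator

class MyPECIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.initialize()
        # Predict to t + dt/2 using the currently stored accelerations.
        self.stage1()
        self.update_domain()
        self.do_post_stage(0.5*dt, 1)

        # Evaluate at the predicted state, then correct to t + dt.
        self.compute_accelerations()
        self.stage2()
        self.update_domain()
        self.do_post_stage(dt, 2)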
-
-
class
pysph.sph.integrator.
EulerIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
-
-
class
pysph.sph.integrator.
Integrator
(**kw)[source]¶ Bases:
object
Generic class for multi-step integrators in PySPH for a system of ODEs of the form \(\frac{dy}{dt} = F(y)\).
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
compute_time_step
(dt, cfl)[source]¶ If there are any adaptive timestep constraints, the appropriate timestep is returned, else None is returned.
-
initial_acceleration
(**kwargs)¶ Compute the initial accelerations if needed before the iterations start.
The default implementation only does this for the first acceleration evaluator. So if you have multiple evaluators, you must override this method in a subclass.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
-
set_acceleration_evals
(a_evals)[source]¶ Set the acceleration evaluators.
This must be done before the integrator is used.
If you are using the SPHCompiler, it automatically calls this method.
-
set_compiled_object
(c_integrator)[source]¶ Set the high-performance compiled object to call internally.
-
set_post_stage_callback
(callback)[source]¶ This callback is called when the particles are moved, i.e one stage of the integration is done.
This callback is passed the current time value, the timestep and the stage.
The current time value is t + stage_dt, for example this would be 0.5*dt for a two stage predictor corrector integrator.
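A small sketch of such a callback (the print is illustrative only; `integrator` is constructed as in the examples above):
def post_stage(current_time, dt, stage):
    # stage counts from 1; current_time is t + stage_dt.
    print('stage %d done at t = %g' % (stage, current_time))

integrator.set_post_stage_callback(post_stage)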
-
step
(time, dt)[source]¶ This function is called by the solver.
To implement the integration step please override the
one_timestep
method.
-
update_domain
(**kwargs)¶ Update the domain of the simulation.
This is to be called when particles move so the ghost particles (periodicity, mirror boundary conditions) can be reset. Further, this also recalculates the appropriate cell size based on the particle kernel radius, h. This should be called explicitly when desired but usually this is done when the particles are moved or the h is changed.
The integrator should explicitly call this when needed in the one_timestep method.
-
-
class
pysph.sph.integrator.
LeapFrogIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.PECIntegrator
A leap-frog integrator.
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
-
-
class
pysph.sph.integrator.
PECIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
In the Predict-Evaluate-Correct (PEC) mode, the system is advanced using:
\[ \begin{align}\begin{aligned}y^{n+\frac{1}{2}} = y^n + \frac{\Delta t}{2}F(y^{n-\frac{1}{2}}) --> Predict\\F(y^{n+\frac{1}{2}}) --> Evaluate\\y^{n + 1} = y^n + \Delta t F(y^{n+\frac{1}{2}})\end{aligned}\end{align} \]Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
-
-
class
pysph.sph.integrator.
PEFRLIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
A Position-Extended Forest-Ruth-Like integrator [Omeylan2002]
References
[Omeylan2002] I.M. Omelyan, I.M. Mryglod and R. Folk, “Optimized Forest-Ruth- and Suzuki-like algorithms for integration of motion in many-body systems”, Computer Physics Communications 146, 188 (2002). http://arxiv.org/abs/cond-mat/0110585
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
-
-
class
pysph.sph.integrator.
TVDRK3Integrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
In the TVD-RK3 integrator, the system is advanced using:
\[ \begin{align}\begin{aligned}y^{n + \frac{1}{3}} = y^n + \Delta t F( y^n )\\y^{n + \frac{2}{3}} = \frac{3}{4}y^n + \frac{1}{4}(y^{n + \frac{1}{3}} + \Delta t F(y^{n + \frac{1}{3}}))\\y^{n + 1} = \frac{1}{3}y^n + \frac{2}{3}(y^{n + \frac{2}{3}} + \Delta t F(y^{n + \frac{2}{3}}))\end{aligned}\end{align} \]Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
- self.initialize()
- self.stage1(), self.stage2() etc. depending on the number of stages available.
- self.compute_accelerations(index=0, update_nnps=True)
- self.do_post_stage(stage_dt, stage_count_from_1)
- self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.
-
Integrator steps for different schemes.
Implement as many stages as needed.
-
class
pysph.sph.integrator_step.
ADKEStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Predictor Corrector integrator for Gas-dynamics ADKE
-
initialize
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e, d_e0, d_rho, d_rho0)[source]¶
-
-
class
pysph.sph.integrator_step.
AdamiVerletStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Verlet time integration described in A generalized wall boundary condition for smoothed particle hydrodynamics 2012, JCP, 231, pp 7057–7075
This integrator can operate in either PEC mode or in EPEC mode as described in the paper.
-
class
pysph.sph.integrator_step.
EulerStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Fast but inaccurate integrator. Use this for testing
-
class
pysph.sph.integrator_step.
GasDFluidStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Predictor Corrector integrator for Gas-dynamics
-
initialize
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_h, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e, d_e0, d_h0, d_converged, d_omega, d_rho, d_rho0, d_alpha1, d_alpha2, d_alpha10, d_alpha20)[source]¶
-
-
class
pysph.sph.integrator_step.
InletOutletStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
A trivial integrator for the inlet/outlet particles
-
class
pysph.sph.integrator_step.
IntegratorStep
[source]¶ Bases:
object
Subclass this and implement the methods
initialize
,stage1
etc. Use the same conventions as the equations.
-
class
pysph.sph.integrator_step.
LeapFrogStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Using this stepper with XSPH as implemented in pysph.base.basic_equations.XSPHCorrection is not directly possible and requires a nicer implementation where the correction alone is added to
ax, ay, az
.
-
class
pysph.sph.integrator_step.
OneStageRigidBodyStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Simple one stage rigid-body motion
-
class
pysph.sph.integrator_step.
PEFRLStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Using this stepper with XSPH as implemented in pysph.base.basic_equations.XSPHCorrection is not directly possible and requires a nicer implementation where the correction alone is added to
ax, ay, az
.-
stage2
(d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av, d_w, d_aw, d_ax, d_ay, d_az, d_rho, d_arho, d_e, d_ae, dt=0.0)[source]¶
-
stage3
(d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av, d_w, d_aw, d_ax, d_ay, d_az, d_rho, d_arho, d_e, d_ae, dt=0.0)[source]¶
-
-
class
pysph.sph.integrator_step.
SolidMechStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Predictor corrector Integrator for solid mechanics problems
-
initialize
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_s000, d_s010, d_s020, d_s110, d_s120, d_s220, d_e0, d_e)[source]¶
-
stage1
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, d_e, d_e0, d_ae, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_s000, d_s010, d_s020, d_s110, d_s120, d_s220, d_as00, d_as01, d_as02, d_as11, d_as12, d_as22, dt)[source]¶
-
stage2
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, d_e, d_ae, d_e0, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_s000, d_s010, d_s020, d_s110, d_s120, d_s220, d_as00, d_as01, d_as02, d_as11, d_as12, d_as22, dt)[source]¶
-
-
class
pysph.sph.integrator_step.
TransportVelocityStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Integrator defined in ‘A transport velocity formulation for smoothed particle hydrodynamics’, 2013, JCP, 241, pp 292–307
For a predictor-corrector style of integrator, this integrator should operate only in PEC mode.
-
class
pysph.sph.integrator_step.
TwoStageRigidBodyStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Simple rigid-body motion
At each stage of the integrator, the prescribed velocity and accelerations are incremented by dt/2.
Note that the time centered velocity is used for updating the particle positions. This ensures exact motion for a constant acceleration.
-
class
pysph.sph.integrator_step.
VelocityVerletSymplecticWCSPHStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Another symplectic second order integrator described in the review paper by Monaghan:
J. Monaghan, “Smoothed Particle Hydrodynamics”, Reports on Progress in Physics, 2005, 68, pp 1703–1759 [JM05]
This is the kick–drift–kick form of the Verlet integrator.
-
class
pysph.sph.integrator_step.
VerletSymplecticWCSPHStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Symplectic second order integrator described in the review paper by Monaghan:
J. Monaghan, “Smoothed Particle Hydrodynamics”, Reports on Progress in Physics, 2005, 68, pp 1703–1759 [JM05]
Notes:
This integrator should run in PEC mode since in the first stage, the positions are updated using the current velocity. The accelerations are then computed to advance to the full time step values.
This version of the integrator does not update the density. That is, the summation density is used instead of the continuity equation.
-
class
pysph.sph.integrator_step.
WCSPHStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Standard Predictor Corrector integrator for the WCSPH formulation
Use this integrator for WCSPH formulations. In the predictor step, the particles are advanced to t + dt/2. The particles are then advanced with the new force computed at this position.
This integrator can be used in PEC or EPEC mode.
The same integrator can be used for other problems. Like for example solid mechanics (see SolidMechStep)
-
initialize
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho)[source]¶
-
-
class
pysph.sph.integrator_step.
WCSPHTVDRK3Step
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
TVD RK3 stepper for WCSPH
This integrator requires \(2\) stages for the storage of the acceleration variables.
-
initialize
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho)[source]¶
-
stage1
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, dt)[source]¶
-
-
class
pysph.sph.gas_dynamics.magma2.
TVDRK2Integrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Total variation diminishing (TVD) second-order Runge–Kutta (RK2) integrator. Prescribed equations in [Rosswog2020b] are,
\[ \begin{align}\begin{aligned}y^{*} = y^n + \Delta t f(y^{n}) --> Predict\\y^{n+1} = 0.5 (y^n + y^{*} + \Delta t f(y^{*})) --> Correct\end{aligned}\end{align} \]This is not suitable for use with periodic boundaries. If a particle crosses the left boundary at the prediction step, update_domain() will reintroduce that particle at the right boundary. The correction step then essentially averages the two positions, and the particle ends up near the mid-point of the domain. To avoid this issue, the equation for the correction step is changed to,
\[y^{n+1} = y^{*} + \frac{\Delta t}{2}\left(f(y^{*}) - f(y^{n})\right)\]Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
class
pysph.sph.gas_dynamics.magma2.
TVDRK2IntegratorWithRecycling
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Total variation diminishing (TVD) second-order Runge–Kutta (RK2) integrator with recycling of derivatives. The system is advanced using:
\[ \begin{align}\begin{aligned}y^{*,n} = y^n + \Delta t f(y^{*,n-1})\\y^{n+1} = 0.5 (y^n + y^{*,n} + \Delta t f(y^{*,n}))\end{aligned}\end{align} \]This is not suitable for use with periodic boundaries. If a particle crosses the left boundary at the prediction step, update_domain() will reintroduce that particle at the right boundary. The correction step then essentially averages the two positions, and the particle ends up near the mid-point of the domain. To avoid this issue, the equation for the correction step is changed to,
\[y^{n+1} = y^{*,n} + \frac{\Delta t}{2}\left(f(y^{*,n}) - f(y^{*,n-1})\right)\]Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
-
class
pysph.sph.gas_dynamics.magma2.
TVDRK2Step
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Total variation diminishing (TVD) second-order Runge–Kutta (RK2) integrator step.
SPH Kernels¶
Definition of some SPH kernel functions
-
class
pysph.base.kernels.
CubicSpline
(dim=1)[source]¶ Bases:
object
Cubic Spline Kernel: [Monaghan1992]
\[\begin{split}W(q) = \ &\sigma_3\left[ 1 - \frac{3}{2}q^2\left( 1 - \frac{q}{2} \right) \right], \ & \textrm{for} \ 0 \leq q \leq 1,\\ = \ &\frac{\sigma_3}{4}(2-q)^3, & \textrm{for} \ 1 < q \leq 2,\\ = \ &0, & \textrm{for}\ q>2, \\\end{split}\]where \(\sigma_3\) is a dimensional normalizing factor for the cubic spline function given by:
\[\begin{split}\sigma_3 = \ & \frac{2}{3h}, & \textrm{for dim=1}, \\ \sigma_3 = \ & \frac{10}{7\pi h^2}, \ & \textrm{for dim=2}, \\ \sigma_3 = \ & \frac{1}{\pi h^3}, & \textrm{for dim=3}. \\\end{split}\]References
[Monaghan1992] (1, 2) J. Monaghan, Smoothed Particle Hydrodynamics, “Annual Review of Astronomy and Astrophysics”, 30 (1992), pp. 543-574.
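As a hedged usage sketch, kernels are instantiated with the problem dimension; the pure-Python kernel() evaluation method and its arguments shown below are assumptions for illustration:
from pysph.base.kernels import CubicSpline

kernel = CubicSpline(dim=2)
# Evaluate W at q = rij/h = 0.5.
w = kernel.kernel(xij=[0.5, 0.0, 0.0], rij=0.5, h=1.0)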
-
class
pysph.base.kernels.
Gaussian
(dim=2)[source]¶ Bases:
object
Gaussian Kernel: [Liu2010]
\[\begin{split}W(q) = \ &\sigma_g e^{-q^2}, \ & \textrm{for} \ 0\leq q \leq 3,\\ = \ & 0, & \textrm{for} \ q>3,\\\end{split}\]where \(\sigma_g\) is a dimensional normalizing factor for the gaussian function given by:
\[\begin{split}\sigma_g = \ & \frac{1}{\pi^{1/2} h}, \ & \textrm{for dim=1}, \\ \sigma_g = \ & \frac{1}{\pi h^2}, \ & \textrm{for dim=2}, \\ \sigma_g = \ & \frac{1}{\pi^{3/2} h^3}, & \textrm{for dim=3}. \\\end{split}\]References
[Liu2010] (1, 2) M. Liu, & G. Liu, Smoothed particle hydrodynamics (SPH): an overview and recent developments, “Archives of computational methods in engineering”, 17.1 (2010), pp. 25-76.
-
class
pysph.base.kernels.
QuinticSpline
(dim=2)[source]¶ Bases:
object
Quintic Spline SPH kernel: [Liu2010]
\[\begin{split}W(q) = \ &\sigma_5\left[ (3-q)^5 - 6(2-q)^5 + 15(1-q)^5 \right], \ & \textrm{for} \ 0\leq q \leq 1,\\ = \ &\sigma_5\left[ (3-q)^5 - 6(2-q)^5 \right], & \textrm{for} \ 1 < q \leq 2,\\ = \ &\sigma_5 \ (3-q)^5 , & \textrm{for} \ 2 < q \leq 3,\\ = \ & 0, & \textrm{for} \ q>3,\\\end{split}\]where \(\sigma_5\) is a dimensional normalizing factor for the quintic spline function given by:
\[\begin{split}\sigma_5 = \ & \frac{1}{120 h}, & \textrm{for dim=1}, \\ \sigma_5 = \ & \frac{7}{478\pi h^2}, \ & \textrm{for dim=2}, \\ \sigma_5 = \ & \frac{1}{120\pi h^3}, & \textrm{for dim=3}. \\\end{split}\]
-
class
pysph.base.kernels.
SuperGaussian
(dim=2)[source]¶ Bases:
object
Super Gaussian Kernel: [Monaghan1992]
\[\begin{split}W(q) = \ &\frac{1}{h^{d}\pi^{d/2}} e^{-q^2} (d/2 + 1 - q^2), \ & \textrm{for} \ 0\leq q \leq 3,\\ = \ & 0, & \textrm{for} \ q>3,\\\end{split}\]where \(d\) is the number of dimensions.
-
class
pysph.base.kernels.
WendlandQuintic
(dim=2)[source]¶ Bases:
object
The following is the Wendland quintic (C2) kernel for 2D and 3D.
\[\begin{split}W(q) = \ & \alpha_d (1-q/2)^4 (2q+1), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\\end{split}\]where \(d\) is the number of dimensions and
\[\begin{split}\alpha_d = \ & \frac{7}{4\pi h^2}, \ & \textrm{for dim=2}, \\ \alpha_d = \ & \frac{21}{16\pi h^3}, \ & \textrm{for dim=3}\end{split}\]
-
class
pysph.base.kernels.
WendlandQuinticC2_1D
(dim=1)[source]¶ Bases:
object
The following is the Wendland quintic (C2) kernel for 1D.
\[\begin{split}W(q) = \ & \alpha_d (1-q/2)^3 (1.5q+1), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\\end{split}\]where \(d\) is the number of dimensions and
\[\alpha_d = \frac{5}{8h}, \textrm{for dim=1}\]
-
class
pysph.base.kernels.
WendlandQuinticC4
(dim=2)[source]¶ Bases:
object
The following is the Wendland quintic (C4) kernel for 2D and 3D.
\[\begin{split}W(q) = \ & \alpha_d (1-q/2)^6 \left(\frac{35}{12} q^2 + 3q + 1\right), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\\end{split}\]where \(d\) is the number of dimensions and
\[\begin{split}\alpha_d = \ & \frac{9}{4\pi h^2}, \ & \textrm{for dim=2}, \\ \alpha_d = \ & \frac{495}{256\pi h^3}, \ & \textrm{for dim=3}\end{split}\]
-
class
pysph.base.kernels.
WendlandQuinticC4_1D
(dim=1)[source]¶ Bases:
object
The following is the Wendland quintic (C4) kernel for 1D.
\[\begin{split}W(q) = \ & \alpha_d (1-q/2)^5 (2q^2 + 2.5q + 1), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\\end{split}\]where \(d\) is the number of dimensions and
\[\alpha_d = \frac{3}{4h}, \ \textrm{for dim=1}\]
-
class
pysph.base.kernels.
WendlandQuinticC6
(dim=2)[source]¶ Bases:
object
The following is the Wendland quintic (C6) kernel for 2D and 3D.
\[\begin{split}W(q) = \ & \alpha_d (1-q/2)^8 (4q^3 + 6.25q^2 + 4q + 1), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\\end{split}\]where \(d\) is the number of dimensions and
\[\begin{split}\alpha_d = \ & \frac{78}{28\pi h^2}, \ & \textrm{for dim=2}, \\ \alpha_d = \ & \frac{1365}{512\pi h^3}, \ & \textrm{for dim=3}\end{split}\]
-
class
pysph.base.kernels.
WendlandQuinticC6_1D
(dim=1)[source]¶ Bases:
object
The following is the Wendland quintic (C6) kernel for 1D.
\[\begin{split}W(q) = \ & \alpha_d (1-q/2)^7 \left(\frac{21}{8} q^3 + \frac{19}{4} q^2 + 3.5q + 1\right), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\\end{split}\]where \(d\) is the number of dimensions and
\[\alpha_d = \ \frac{55}{64h}, \textrm{for dim=1}\]
Module nnps: Nearest Neighbor Particle Search¶
-
class
pysph.base.nnps_base.
CPUDomainManager
(double xmin=-1000, double xmax=1000, double ymin=0, double ymax=0, double zmin=0, double zmax=0, periodic_in_x=False, periodic_in_y=False, periodic_in_z=False, double n_layers=2.0, backend=None, props=None, mirror_in_x=False, mirror_in_y=False, mirror_in_z=False)¶ Bases:
pysph.base.nnps_base.DomainManagerBase
This class determines the limits of the solution domain.
We expect all simulations to have well defined domain limits beyond which we are either not interested or the solution is invalid to begin with. Thus, if a particle leaves the domain, the solution should be considered invalid (at least locally).
The initial domain limits could be given explicitly or asked to be computed from the particle arrays. The domain could be periodic.
Constructor
The n_layers argument specifies the number of ghost layers as multiples of hmax*radius_scale.
- props: list/dict: properties to copy. Provide a list or dict with the keys as particle array names. Only the specified properties are copied. If not specified, all props are copied.
-
dtype
¶ object
Type: dtype
-
dtype_max
¶ ‘double’
Type: dtype_max
-
ghosts
¶ list
Type: ghosts
-
update
(self)¶ General method that is called before NNPS can bin particles.
This method is responsible for the computation of cell sizes and creation of any ghost particles for periodic or wall boundary conditions.
-
use_double
¶ ‘bool’
Type: use_double
-
class
pysph.base.nnps_base.
Cell
(IntPoint cid, double cell_size, int narrays, int layers=2)¶ Bases:
object
Basic indexing structure for the box-sort NNPS.
For a spatial indexing based on the box-sort algorithm, this class defines the spatial data structure used to hold particle indices (local and global) that are within this cell.
Constructor
Parameters: - cid (IntPoint) – Spatial index (unflattened) for the cell
- cell_size (double) – Spatial extent of the cell in each dimension
- narrays (int) – Number of arrays being binned
- layers (int) – Factor to compute the bounding box
-
centroid
¶ ‘cPoint’
Type: centroid
-
get_bounding_box
(self, Point boxmin, Point boxmax, int layers=1, cell_size=None)¶ Compute the bounding box for the cell.
Parameters: - boxmin (Point (output)) – The bounding box min coordinates are stored here
- boxmax (Point (output)) – The bounding box max coordinates are stored here
- layers (int (input) default (1)) – Number of offset layers to define the bounding box
- cell_size (double (input) default (None)) – Optional cell size to use to compute the bounding box. If not provided, the cell’s size will be used.
-
get_centroid
(self, Point pnt)¶ Utility function to get the centroid of the cell.
Parameters: - pnt (Point (input/output)) – The centroid is computed and stored in this object. The centroid is defined as the origin plus half the cell size in each dimension.
-
gindices
¶ list
Type: gindices
-
is_boundary
¶ ‘bool’
Type: is_boundary
-
lindices
¶ list
Type: lindices
-
set_indices
(self, int index, UIntArray lindices, UIntArray gindices)¶ Set the global and local indices for the cell
-
size
¶ ‘double’
Type: size
-
class
pysph.base.nnps_base.
DomainManager
(double xmin=-1000, double xmax=1000, double ymin=0, double ymax=0, double zmin=0, double zmax=0, periodic_in_x=False, periodic_in_y=False, periodic_in_z=False, double n_layers=2.0, backend=None, props=None, mirror_in_x=False, mirror_in_y=False, mirror_in_z=False)¶ Bases:
object
Constructor
Parameters: - xmin, xmax, ymin, ymax, zmin, zmax (double) – limits of the domain.
- periodic_in_x, periodic_in_y, periodic_in_z (bool) – periodicity in each direction.
- mirror_in_x, mirror_in_y, mirror_in_z (bool) – mirror boundaries in each direction.
- n_layers (double) – number of ghost layers as a multiple of h_max*radius_scale.
- props (list/dict) – properties to copy. Provide a list or dict with the keys as particle array names. Only the specified properties are copied. If not specified, all props are copied.
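A minimal sketch, assuming a unit-length domain that is periodic in x (all other arguments keep their defaults):
from pysph.base.nnps_base import DomainManager

domain = DomainManager(xmin=0.0, xmax=1.0, periodic_in_x=True)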
-
backend
¶ object
Type: backend
-
compute_cell_size_for_binning
(self)¶ Compute the cell size for the binning.
The cell size is chosen as the kernel radius scale times the maximum smoothing length in the local processor. For parallel runs, we would need to communicate the maximum ‘h’ on all processors to decide on the appropriate binning size.
-
manager
¶ object
Type: manager
-
set_cell_size
(self, cell_size)¶
-
set_in_parallel
(self, in_parallel)¶
-
set_pa_wrappers
(self, wrappers)¶
-
set_radius_scale
(self, radius_scale)¶
-
update
(self)¶
-
class
pysph.base.nnps_base.
DomainManagerBase
(double xmin=-1000, double xmax=1000, double ymin=0, double ymax=0, double zmin=0, double zmax=0, periodic_in_x=False, periodic_in_y=False, periodic_in_z=False, double n_layers=2.0, props=None, mirror_in_x=False, mirror_in_y=False, mirror_in_z=False)¶ Bases:
object
Constructor
The n_layers argument specifies the number of ghost layers as multiples of hmax*radius_scale.
- props: list/dict: properties to copy. Provide a list or dict with the keys as particle array names. Only the specified properties are copied. If not specified, all props are copied.
-
cell_size
¶ ‘double’
Type: cell_size
-
compute_cell_size_for_binning
(self)¶
-
copy_props
¶ list
Type: copy_props
-
dim
¶ ‘int’
Type: dim
-
hmin
¶ ‘double’
Type: hmin
-
in_parallel
¶ ‘bool’
Type: in_parallel
-
is_mirror
¶ ‘bool’
Type: is_mirror
-
is_periodic
¶ ‘bool’
Type: is_periodic
-
mirror_in_x
¶ ‘bool’
Type: mirror_in_x
-
mirror_in_y
¶ ‘bool’
Type: mirror_in_y
-
mirror_in_z
¶ ‘bool’
Type: mirror_in_z
-
n_layers
¶ ‘double’
Type: n_layers
-
narrays
¶ ‘int’
Type: narrays
-
pa_wrappers
¶ list
Type: pa_wrappers
-
periodic_in_x
¶ ‘bool’
Type: periodic_in_x
-
periodic_in_y
¶ ‘bool’
Type: periodic_in_y
-
periodic_in_z
¶ ‘bool’
Type: periodic_in_z
-
props
¶ object
Type: props
-
radius_scale
¶ ‘double’
Type: radius_scale
-
set_cell_size
(self, cell_size)¶
-
set_in_parallel
(self, bool in_parallel)¶
-
set_pa_wrappers
(self, wrappers)¶
-
set_radius_scale
(self, double radius_scale)¶
-
xmax
¶ ‘double’
Type: xmax
-
xmin
¶ ‘double’
Type: xmin
-
xtranslate
¶ ‘double’
Type: xtranslate
-
ymax
¶ ‘double’
Type: ymax
-
ymin
¶ ‘double’
Type: ymin
-
ytranslate
¶ ‘double’
Type: ytranslate
-
zmax
¶ ‘double’
Type: zmax
-
zmin
¶ ‘double’
Type: zmin
-
ztranslate
¶ ‘double’
Type: ztranslate
-
class
pysph.base.nnps_base.
NNPS
(int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bool cache=False, bool sort_gids=False)¶ Bases:
pysph.base.nnps_base.NNPSBase
Nearest neighbor query class using the box-sort algorithm.
NNPS bins all local particles using the box sort algorithm in Cells. The cells are stored in a dictionary ‘cells’ which is keyed on the spatial index (IntPoint) of the cell.
-
current_cache
¶ pysph.base.nnps_base.NeighborCache
Type: current_cache
-
get_spatially_ordered_indices
(self, int pa_index, LongArray indices)¶
-
set_in_parallel
(self, bool in_parallel)¶
-
set_use_cache
(self, bool use_cache)¶
-
sort_gids
¶ ‘bool’
Type: sort_gids
-
spatially_order_particles
(self, int pa_index)¶ Spatially order particles such that nearby particles have indices nearer each other. This may improve pre-fetching on the CPU.
-
update
(self)¶ Update the local data after particles have moved.
For parallel runs, we want the NNPS to be independent of the ParallelManager which is solely responsible for distributing particles across available processors. We assume therefore that after a parallel update, each processor has all the local particle information it needs and this operation is carried out locally.
For serial runs, this method should be called when the particles have moved.
-
update_domain
(self)¶
-
xmax
¶ cyarray.carray.DoubleArray
Type: xmax
-
xmin
¶ cyarray.carray.DoubleArray
Type: xmin
-
-
class
pysph.base.nnps_base.
NNPSBase
(int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bool cache=False, bool sort_gids=False)¶ Bases:
object
Constructor for NNPS
Parameters: - dim (int) – Dimension (fixme: Not sure if this is really needed)
- particles (list) – The list of particle arrays we are working on.
- radius_scale (double, default (2)) – Optional kernel radius scale. Defaults to 2
- domain (DomainManager, default (None)) – Optional limits for the domain
- cache (bint) – Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations.
- sort_gids (bint, default (False)) – Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run.
-
brute_force_neighbors
(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs)¶
-
cache
¶ list
Type: cache
-
cell_size
¶ ‘double’
Type: cell_size
-
dim
¶ ‘int’
Type: dim
-
domain
¶ pysph.base.nnps_base.DomainManager
Type: domain
-
get_nearest_particles
(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs)¶
-
get_nearest_particles_no_cache
(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bool prealloc)¶ Find nearest neighbors for particle id ‘d_idx’ without cache
Parameters: - src_index (int) – Index in the list of particle arrays to which the neighbors belong
- dst_index (int) – Index in the list of particle arrays to which the query point belongs
- d_idx (size_t) – Index of the query point in the destination particle array
- nbrs (UIntArray) – Array to be populated by nearest neighbors of ‘d_idx’
-
hmin
¶ ‘double’
Type: hmin
-
is_periodic
¶ ‘bool’
Type: is_periodic
-
n_cells
¶ ‘int’
Type: n_cells
-
narrays
¶ ‘int’
Type: narrays
-
pa_wrappers
¶ list
Type: pa_wrappers
-
particles
¶ list
Type: particles
-
radius_scale
¶ ‘double’
Type: radius_scale
-
set_context
(self, int src_index, int dst_index)¶ Setup the context before asking for neighbors. The dst_index represents the particles for whom the neighbors are to be determined from the particle array with index src_index.
Parameters: - src_index (int: the source index of the particle array.) –
- dst_index (int: the destination index of the particle array.) –
-
class
pysph.base.nnps_base.
NNPSParticleArrayWrapper
(ParticleArray pa)¶ Bases:
object
-
gid
¶ cyarray.carray.UIntArray
Type: gid
-
h
¶ cyarray.carray.DoubleArray
Type: h
-
name
¶ unicode
Type: name
-
np
¶ ‘int’
Type: np
-
pa
¶ pysph.base.particle_array.ParticleArray
Type: pa
-
remove_tagged_particles
(self, int tag)¶
-
tag
¶ cyarray.carray.IntArray
Type: tag
-
x
¶ cyarray.carray.DoubleArray
Type: x
-
y
¶ cyarray.carray.DoubleArray
Type: y
-
z
¶ cyarray.carray.DoubleArray
Type: z
-
-
class
pysph.base.nnps_base.
NeighborCache
(NNPS nnps, int dst_index, int src_index)¶ Bases:
object
-
find_all_neighbors
(self)¶
-
get_neighbors
(self, int src_index, size_t d_idx, UIntArray nbrs)¶
-
update
(self)¶
-
-
pysph.base.nnps_base.
arange_uint
(int start, int stop=-1) → UIntArray¶ Utility function to return a numpy.arange for a UIntArray
-
pysph.base.nnps_base.
get_centroid
(double cell_size, IntPoint cid)¶ Get the centroid of the cell.
Parameters: - cell_size (double (input)) – Cell size used for binning
- cid (IntPoint (input)) – Spatial index for a cell
Returns: centroid
Return type: Point
Notes
The centroid in any coordinate direction is defined to be the origin plus half the cell size in that direction
-
pysph.base.nnps_base.
get_number_of_threads
() → int¶
-
pysph.base.nnps_base.
py_flatten
(IntPoint cid, IntArray ncells_per_dim, int dim)¶ Python wrapper
-
pysph.base.nnps_base.
py_get_valid_cell_index
(IntPoint cid, IntArray ncells_per_dim, int dim, int n_cells)¶ Return the flattened cell index for a valid cell
-
pysph.base.nnps_base.
py_unflatten
(long cell_index, IntArray ncells_per_dim, int dim)¶ Python wrapper
-
pysph.base.nnps_base.
set_number_of_threads
(int n)¶
-
class
pysph.base.linked_list_nnps.
LinkedListNNPS
(int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bool fixed_h=False, bool cache=False, bool sort_gids=False)¶ Bases:
pysph.base.nnps_base.NNPS
Nearest neighbor query class using the linked list method.
Constructor for NNPS
Parameters: - dim (int) – Number of dimension.
- particles (list) – The list of particle arrays we are working on
- radius_scale (double, default (2)) – Optional kernel radius scale. Defaults to 2
- ghost_layers (int) – Optional number of layers to share in parallel
- domain (DomainManager, default (None)) – Optional limits for the domain
- fixed_h (bint) – Optional flag to use constant cell sizes throughout.
- cache (bint) – Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations.
- sort_gids (bint, default (False)) – Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run.
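A hedged end-to-end sketch (the particle positions and smoothing lengths are assumed); get_nearest_particles is documented on the NNPS base class above:
import numpy as np
from cyarray.carray import UIntArray
from pysph.base.utils import get_particle_array
from pysph.base.linked_list_nnps import LinkedListNNPS

x, y = np.mgrid[0:1:10j, 0:1:10j]
pa = get_particle_array(name='fluid', x=x.ravel(), y=y.ravel(),
                        h=np.full(x.size, 0.12))
nnps = LinkedListNNPS(dim=2, particles=[pa], radius_scale=2.0)
nbrs = UIntArray()
nnps.get_nearest_particles(0, 0, 0, nbrs)  # neighbors of particle 0
print(nbrs.length)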
-
fixed_h
¶ ‘bool’
Type: fixed_h
-
get_spatially_ordered_indices
(self, int pa_index, LongArray indices)¶
-
heads
¶ list
Type: heads
-
ncells_per_dim
¶ cyarray.carray.IntArray
Type: ncells_per_dim
-
ncells_tot
¶ ‘int’
Type: ncells_tot
-
nexts
¶ list
Type: nexts
-
set_context
(self, int src_index, int dst_index)¶ Setup the context before asking for neighbors. The dst_index represents the particles for whom the neighbors are to be determined from the particle array with index src_index.
Parameters: - src_index (int: the source index of the particle array.) –
- dst_index (int: the destination index of the particle array.) –
-
class
pysph.base.box_sort_nnps.
BoxSortNNPS
¶ Bases:
pysph.base.linked_list_nnps.LinkedListNNPS
Nearest neighbor query class using the box-sort method, implemented on top of the LinkedList algorithm. This makes it very fast, but perhaps not as safe as the DictBoxSortNNPS. All this class does is use a std::map to obtain a linear cell index from the actual flattened cell index.
Constructor for NNPS
Parameters: - dim (int) – Number of dimension.
- particles (list) – The list of particle arrays we are working on
- radius_scale (double, default (2)) – Optional kernel radius scale. Defaults to 2
- ghost_layers (int) – Optional number of layers to share in parallel
- domain (DomainManager, default (None)) – Optional limits for the domain
- fixed_h (bint) – Optional flag to use constant cell sizes throughout.
- cache (bint) – Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations.
- sort_gids (bint, default (False)) – Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run.
-
cell_to_index
¶ ‘map[long,int]’
Type: cell_to_index
-
class
pysph.base.box_sort_nnps.
DictBoxSortNNPS
(int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, cache=False, sort_gids=False)¶ Bases:
pysph.base.nnps_base.NNPS
Nearest neighbor query class using the box-sort algorithm using a dictionary.
NNPS bins all local particles using the box sort algorithm in Cells. The cells are stored in a dictionary ‘cells’ which is keyed on the spatial index (IntPoint) of the cell.
Constructor for NNPS
Parameters: - dim (int) – Number of dimensions.
- particles (list) – The list of particle arrays we are working on.
- radius_scale (double, default (2)) – Optional kernel radius scale. Defaults to 2
- domain (DomainManager, default (None)) – Optional limits for the domain
- cache (bint) – Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations.
- sort_gids (bint, default (False)) – Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run.
-
cells
¶ dict
Type: cells
-
get_nearest_particles_no_cache
(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bool prealloc)¶ Utility function to get near-neighbors for a particle.
Parameters: - src_index (int) – Index of the source particle array in the particles list
- dst_index (int) – Index of the destination particle array in the particles list
- d_idx (int (input)) – Destination particle for which neighbors are sought.
- nbrs (UIntArray (output)) – Neighbors for the requested particle are stored here.
- prealloc (bool) – Specifies if the neighbor array already has pre-allocated space for the neighbor list. In this case the neighbors are directly set in the given array without resetting or appending to the array. This improves performance when the neighbors are cached.
-
class
pysph.base.spatial_hash_nnps.
ExtendedSpatialHashNNPS
(int dim, list particles, double radius_scale=2.0, int H=3, int ghost_layers=1, domain=None, bool fixed_h=False, bool cache=False, bool sort_gids=False, long long table_size=131072, bool approximate=False)¶ Bases:
pysph.base.nnps_base.NNPS
Finds nearest neighbors using Extended Spatial Hashing algorithm
Sub-divides each cell into smaller ones. Useful when particles cluster in a cell.
For approximate Extended Spatial Hash, if the distance between a cell and the cell of the query particle is greater than search radius, the entire cell is ignored.
Ref. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.6732&rep=rep1&type=pdf
-
set_context
(self, int src_index, int dst_index)¶ Set context for nearest neighbor searches.
Parameters: - src_index (int) – Index in the list of particle arrays to which the neighbors belong
- dst_index (int) – Index in the list of particle arrays to which the query point belongs
-
-
class
pysph.base.spatial_hash_nnps.
SpatialHashNNPS
(int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bool fixed_h=False, bool cache=False, bool sort_gids=False, long long table_size=131072)¶ Bases:
pysph.base.nnps_base.NNPS
Nearest neighbor particle search using Spatial Hashing algorithm
Uses a hashtable to store particles according to cell it belongs to.
Ref. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.6732&rep=rep1&type=pdf
-
set_context
(self, int src_index, int dst_index)¶ Set context for nearest neighbor searches.
Parameters: - src_index (int) – Index in the list of particle arrays to which the neighbors belong
- dst_index (int) – Index in the list of particle arrays to which the query point belongs
-
Module parallel_manager¶
Module particle_array¶
The ParticleArray
class itself is documented as below.
A ParticleArray represents a collection of particles.
-
class
pysph.base.particle_array.
ParticleArray
(unicode name=u'', default_particle_tag=Local, constants=None, backend=None, **props)¶ Bases:
object
Class to represent a collection of particles.
-
name
¶ name of this particle array.
Type: str
-
properties
¶ dictionary of {prop_name:carray}.
Type: dict
-
constants
¶ dictionary of {const_name: carray}
Type: dict
Examples
There are many ways to create a ParticleArray:
>>> p = ParticleArray(name='fluid', x=[1., 2., 3., 4.])
>>> p.name
'fluid'
>>> p.x, p.tag, p.pid, p.gid
For a full specification of properties with their type etc.:
>>> p = ParticleArray(name='fluid',
...                   x=dict(data=[1, 2], type='int', default=1))
>>> p.get_carray('x').get_c_type()
'int'
The default value is what is used to set the default value when a new particle is added and the arrays auto-resized.
To create constants that are not resized with added/removed particles:
>>> p = ParticleArray(name='f', x=[1,2], constants={'c':[0.,0.,0.]})
Constructor
Parameters: - name (str) – name of this particle array.
- default_particle_tag (int) – one of Local, Remote or Ghost
- constants (dict) – dictionary of constant arrays for the entire particle array. These must be arrays and are not resized when particles are added or removed. These are stored as CArrays internally.
- props – any additional keyword arguments are taken to be properties, one for each property.
-
add_constant
(self, unicode name, data)¶ Add a constant property to the particle array.
A constant property is an array but has a fixed size in that it is never resized as particles are added or removed. These properties are always stored internally as CArrays.
An example of where this is useful is if you need to calculate the center of mass of a solid body or the net force on the body.
Parameters: - name (str) – name of the constant
- data (array-like) – the value for the data.
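A short sketch of a typical use, computing a per-body aggregate (the names here are assumed):
import numpy as np
from pysph.base.utils import get_particle_array

pa = get_particle_array(name='body', x=np.arange(4.0), m=np.ones(4))
pa.add_constant('total_mass', [0.0])
pa.total_mass[0] = pa.m.sum()  # never resized as particles come and go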
-
add_output_arrays
(self, list props)¶ Append props to the existing list of output arrays
Parameters: props (list) – The additional list of property arrays to save
-
add_particles
(self, align=True, **particle_props)¶ Add the given particles to this particle array.
Parameters: particle_props (dict) – a dictionary containing numpy arrays for various particle properties. Notes
- all properties should be arrays of the same length.
- all properties should already be present in this particle array; if new properties are seen, an exception will be raised.
-
add_property
(self, unicode name, unicode type=u'double', default=None, data=None, stride=1)¶ Add a new property to the particle array.
If a default is not supplied, 0 is assumed. The stride is useful when many elements are needed per particle. For example, if stride is 3 then 3 elements are allocated per particle.
Parameters: - name (str) – compulsory name of property.
- type (str) – specifying the data type of this property (‘double’, ‘int’ etc.)
- default (value) – specifying the default value of this property.
- data (ndarray) – specifying the data associated with each particle.
- stride (int) – the number of elements per particle.
Notes
If there are no particles currently in the particle array, and a new property with some particles is added, all the remaining properties will be resized to the size of the newly added array.
If there are some particles in the particle array, and a new property is added without any particles, then this new property will be resized according to the current size.
If there are some particles in the particle array and a new property is added with a different number of particles, then an error will be raised.
Warning
- it is best not to add properties with data when you already have particles in the particle array. The reason is that the particles in the particle array are stored so that the 'Real' particles are at the top of the list, followed by the dummy ones. The data in your property array should be matched to the particles appropriately. This may not always be possible when there are particles of different types in the particle array.
- Properties without any values can be added anytime.
- While initializing particle arrays only using the add_property function, you will have to call align_particles manually to make sure particles are aligned properly.
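A short sketch (the property names are illustrative):
>>> pa = ParticleArray(name='fluid', x=[0., 1., 2.])
>>> pa.add_property('T', type='double', default=300.0)
>>> pa.add_property('grad_v', stride=3)  # 3 elements per particle
>>> pa.align_particles()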
-
align_particles
(self) → int¶ Moves all ‘Local’ particles to the beginning of the array
This makes retrieving numpy slices of properties of ‘Local’ particles possible. This facility will be required frequently.
Notes
Pseudo-code:
index_arr = LongArray(n)
next_insert = 0
for i from 0 to n:
    p <- ith particle
    if p is Local:
        if i != next_insert:
            tmp = index_arr[next_insert]
            index_arr[next_insert] = i
            index_arr[i] = tmp
            next_insert += 1
        else:
            index_arr[i] = i
            next_insert += 1
    else:
        index_arr[i] = i

# we now have the new index assignment.
# swap the required values as needed.
for every property array:
    for i from 0 to n:
        if index_arr[i] != i:
            tmp = prop[i]
            prop[i] = prop[index_arr[i]]
            prop[index_arr[i]] = tmp
-
append_parray
(self, ParticleArray parray, bool align=True, bool update_constants=False) → int¶ Add particles from a particle array
properties that are not there in self will be added
-
backend
¶ Type: unicode
-
clear
(self)¶ Clear all data held by this array
-
constants
Type: dict
-
copy_over_properties
(self, dict props)¶ Copy the properties from one set to another.
Parameters: props (dict) – A mapping between the properties to be copied. Examples
To save the properties ‘x’ and ‘y’ to say ‘x0’ and ‘y0’:
>>> pa.copy_over_properties(props={'x': 'x0', 'y': 'y0'})
-
copy_properties
(self, ParticleArray source, long start_index=-1, long end_index=-1)¶ Copy properties from source to self
Parameters: - source (ParticleArray) – the particle array from where to copy.
- start_index (long) – the first particle in self which maps to the 0th particle in source
- end_index (long) – the index of the first particle from start_index that is not copied
-
default_values
¶ Type: dict
-
empty_clone
(self, props=None) → ParticleArray¶ Creates an empty clone of the particle array
-
ensure_properties
(self, ParticleArray src, list props=None)¶ Ensure that the particle array has the same properties as the one given.
Note that this does not check for any constants but only properties.
If the optional props argument is passed it only checks for these.
-
extend
(self, int num_particles)¶ Increase the total number of particles by the requested amount
New particles are added at the end of the list; you will have to manually call align_particles later in order to update the number of particles.
-
extract_particles
(self, indices, ParticleArray dest_array=None, bool align=True, list props=None) → ParticleArray¶ Create new particle array for particles with indices in index_array
Parameters: - indices (list/array/LongArray) – indices of particles to be extracted (can be a LongArray or list/numpy array).
- dest_array (ParticleArray) – optional Particle array to populate. Note that this array should have the necessary properties. If none is passed a new particle array is created and returned.
- align (bool) – Specify if the destination particle array is to be aligned after particle extraction.
- props (list) – the list of properties to extract, if None all properties are extracted.
Notes
The algorithm is as follows:
- create a new particle array with the required properties.
- resize the new array to the desired length (index_array.length)
- copy the properties from the existing array to the new array.
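For example:
>>> from pysph.base.utils import get_particle_array
>>> pa = get_particle_array(name='fluid', x=[0., 1., 2., 3.])
>>> sub = pa.extract_particles([0, 2], props=['x', 'm'])
>>> sub.get_number_of_particles()
2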
-
get
(self, *args, only_real_particles=True)¶ Return the numpy array/constant for the property names in the arguments.
Parameters: - only_real_particles (bool) – indicates if the properties of only real particles are to be returned, or those of all particles. By default only real particles are returned.
- args – a list of property names.
Notes
The returned numpy array does NOT own its data; it is a view on the internal storage, so operations that resize the particle array may invalidate it.
Returns: Return type: Numpy array.
-
get_carray
(self, unicode prop) → BaseArray¶ Return the c-array for the property or constant.
-
get_lb_props
(self)¶ Return the properties that are to be load balanced. If none are explicitly set by the user, return all of the properties.
-
get_number_of_particles
(self, bool real=False) → int¶ Return the number of particles
-
get_property_arrays
(self, all=True, only_real=True)¶ Return a dictionary of arrays held by the ParticleArray container.
This does not include the constants.
Parameters: - all (bint) – Flag to select all arrays
- only_real (bint) – Flag to select Local/Remote particles
Notes
The dictionary returned is keyed on the property name and the value is the NumPy array representing the data. If all is set to False, the list of arrays is determined by the output_property_arrays data attribute.
-
get_time
(self) → double¶
-
gpu
¶ Type: object
-
has_array
(self, unicode arr_name)¶ Returns true if the array arr_name is present
-
name
Type: unicode
-
num_real_particles
¶ Type: long
-
output_property_arrays
¶ Type: list
-
properties
Type: dict
-
property_arrays
¶ Type: list
-
remove_particles
(self, indices, align=True)¶ Remove the particles at the given indices.
We repeatedly interchange the values of the last element and values from the given indices and reduce the size of the array by one. This is done for every property that is being maintained.
Parameters: indices (array) – an array of indices, this array can be a list, numpy array or a LongArray. Notes
Pseudo-code for the implementation:
if indices.length > number of particles:
    raise ValueError
sorted_indices <- indices sorted in ascending order
for every array in property_array:
    array.remove(sorted_indices)
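For example:
>>> pa = ParticleArray(name='fluid', x=[0., 1., 2., 3.])
>>> pa.remove_particles([1, 3])
>>> pa.get_number_of_particles()
2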
-
remove_property
(self, unicode prop_name)¶ Removes property prop_name from the particle array
-
remove_tagged_particles
(self, int tag, bool align=True)¶ Remove particles that have the given tag.
Parameters: tag (int) – the type of particles that need to be removed.
-
resize
(self, long size)¶ Resize all arrays to the new size. Note that this does not update the number of particles, as this just resizes the internal arrays. To do that, you need to call align_particles.
-
set
(self, **props)¶ Set properties from numpy arrays or array-like objects
Parameters: props (dict) – a dictionary of properties containing the arrays to be set. Notes
- the properties being set must already be present in the properties dict.
- the size of the data should match the array already present.
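A small sketch:
>>> pa = ParticleArray(name='fluid', x=[0., 1., 2.])
>>> pa.set(x=pa.x + 0.5)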
-
set_device_helper
(self, gpu)¶ Set the device helper to push/pull from a hardware accelerator.
-
set_lb_props
(self, list lb_props)¶
-
set_name
(self, unicode name)¶
-
set_num_real_particles
(self, long value)¶
-
set_output_arrays
(self, list props)¶ Set the list of output arrays for this ParticleArray
Parameters: props (list) – The list of property arrays Notes
In PySPH, the solver obtains the list of property arrays to output by calling the ParticleArray.get_property_arrays method. If detailed output is not requested, the output_property_arrays attribute is used to determine the arrays that will be written to file
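For example (the property names are illustrative):
>>> pa.set_output_arrays(['x', 'y', 'rho', 'p'])
>>> pa.add_output_arrays(['au', 'av'])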
-
set_pid
(self, int pid)¶ Set the processor id for all particles
-
set_tag
(self, long tag_value, LongArray indices)¶ Set value of tag to tag_value for the particles in indices
-
set_time
(self, double time)¶
-
set_to_zero
(self, list props)¶
-
stride
¶ Type: dict
-
time
¶ Type: double
-
update_backend
(self, backend=None)¶
-
update_min_max
(self, props=None)¶ Update the min,max values of all properties
-
-
pysph.base.particle_array.
get_ghost_tag
() → int¶
-
pysph.base.particle_array.
get_local_tag
() → int¶
-
pysph.base.particle_array.
get_remote_tag
() → int¶
-
pysph.base.particle_array.
is_ghost
(int tag) → bool¶
-
pysph.base.particle_array.
is_local
(int tag) → bool¶
-
pysph.base.particle_array.
is_remote
(int tag) → bool¶
Convenience functions to create particle arrays¶
There are several convenience functions that provide a particle array with a requisite set of particle properties that are documented below.
-
pysph.base.utils.
arange_long
(start, stop=-1)[source]¶ Creates a LongArray that works like the builtin range with up to 2 arguments, both expected to be positive
-
pysph.base.utils.
create_dummy_particles
(info)[source]¶ Returns a replica (empty) of a list of particles
-
pysph.base.utils.
get_particle_array
(additional_props=None, constants=None, backend=None, **props)[source]¶ Create and return a particle array with default properties.
The default properties are [‘x’, ‘y’, ‘z’, ‘u’, ‘v’, ‘w’, ‘m’, ‘h’, ‘rho’, ‘p’, ‘au’, ‘av’, ‘aw’, ‘gid’, ‘pid’, ‘tag’], this set is available in DEFAULT_PROPS.
Parameters: - additional_props (list) – If specified, add these properties.
- constants (dict) – Any constants to be added to the particle array.
Other Parameters: props (dict) – Additional keywords passed are set as the property arrays.
Examples
>>> x = linspace(0, 1, 10)
>>> pa = get_particle_array(name='fluid', x=x)
>>> pa.properties.keys()
['x', 'z', 'rho', 'pid', 'v', 'tag', 'm', 'p', 'gid', 'au', 'aw', 'av', 'y', 'u', 'w', 'h']
>>> pa1 = get_particle_array(name='fluid', additional_props=['xx', 'yy'])
>>> pa = get_particle_array(name='fluid', x=x, constants={'alpha': 1.0})
>>> pa.constants.keys()
['alpha']
-
pysph.base.utils.
get_particle_array_gasd
(constants=None, **props)[source]¶ Return a particle array for a Gas Dynamics problem.
Parameters: constants (dict) – Dictionary of constants Other Parameters: props (dict) – Additional keywords passed are set as the property arrays. See also
-
pysph.base.utils.
get_particle_array_iisph
(constants=None, **props)[source]¶ Get a particle array for the IISPH formulation.
The default properties are:
['x', 'y', 'z', 'u', 'v', 'w', 'm', 'h', 'rho', 'p', 'au', 'av', 'aw', 'gid', 'pid', 'tag', 'uadv', 'vadv', 'wadv', 'rho_adv', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'dii0', 'dii1', 'dii2', 'V', 'aii', 'dijpj0', 'dijpj1', 'dijpj2', 'p', 'p0', 'piter', 'compression']
Parameters: constants (dict) – Dictionary of constants Other Parameters: props (dict) – Additional keywords passed are set as the property arrays. See also
-
pysph.base.utils.
get_particle_array_rigid_body
(constants=None, **props)[source]¶ Return a particle array for a rigid body motion.
For multiple bodies, add a body_id property starting at index 0, with each index denoting the body to which the particle corresponds.
Parameters: constants (dict) – Dictionary of constants Other Parameters: props (dict) – Additional keywords passed are set as the property arrays. See also
-
pysph.base.utils.
get_particle_array_swe
(constants=None, **props)[source]¶ Return a particle array for the shallow water formulation
This sets the default properties to be:
['x', 'y', 'z', 'u', 'v', 'w', 'h', 'rho', 'arho', 'm', 'p', 'V', 'A', 'cs', 'n', 'rho0', 'rho_prev_iter', 'rho_residual', 'positive_rho_residual', 'summation_rho', 'dw', 'alpha', 'exp_lambda', 'tv', 'tu', 'au', 'av', 'u_prev_step', 'v_prev_step', 'uh', 'vh', 'dt_cfl', 'pa_to_split', 'Sfx', 'Sfy', 'psi', 'sum_Ak', 'u_parent', 'v_parent', 'uh_parent', 'vh_parent', 'parent_idx', 'b', 'bx', 'by', 'bxx', 'bxy', 'byy', 'closest_idx', 'is_merged_pa', 'merge', 'dw_inner_reimann', 'u_inner_reimann', 'v_inner_reimann', 'shep_corr', 'is_wall_boun_pa', 'dw_at_t', 'pa_out_of_domain', 'ob_pa_to_remove', 'ob_pa_to_tag', 'pa_alpha_zero', 'fluid_pa_to_remove']
Parameters: constants (dict) – Dictionary of constants Other Parameters: props (dict) – Additional keywords passed are set as the property arrays. See also
-
pysph.base.utils.
get_particle_array_tvf_fluid
(constants=None, **props)[source]¶ Return a particle array for the TVF formulation for a fluid.
Parameters: constants (dict) – Dictionary of constants Other Parameters: props (dict) – Additional keywords passed are set as the property arrays. See also
-
pysph.base.utils.
get_particle_array_tvf_solid
(constants=None, **props)[source]¶ Return a particle array for the TVF formulation for a solid.
Parameters: constants (dict) – Dictionary of constants Other Parameters: props (dict) – Additional keywords passed are set as the property arrays. See also
-
pysph.base.utils.
get_particle_array_wcsph
(constants=None, **props)[source]¶ Return a particle array for the WCSPH formulation.
This sets the default properties to be:
['x', 'y', 'z', 'u', 'v', 'w', 'h', 'rho', 'm', 'p', 'cs', 'ax', 'ay', 'az', 'au', 'av', 'aw', 'x0','y0', 'z0','u0', 'v0','w0', 'arho', 'rho0', 'div', 'gid','pid', 'tag']
Parameters: constants (dict) – Dictionary of constants Other Parameters: props (dict) – Additional keywords passed are set as the property arrays. See also
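For instance (the values are illustrative):
>>> import numpy as np
>>> from pysph.base.utils import get_particle_array_wcsph
>>> x = np.linspace(0, 1, 100)
>>> pa = get_particle_array_wcsph(name='fluid', x=x, m=0.01, h=0.013,
...                               rho=1000.0)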
Module scheme¶
Abstract class to define the API for an SPH scheme. The idea is that one can define a scheme and thereafter one simply instantiates a suitable scheme, gives it a bunch of particles and runs the application.
-
class
pysph.sph.scheme.
ADKEScheme
(fluids, solids, dim, gamma=1.4, alpha=1.0, beta=2.0, k=1.0, eps=0.0, g1=0, g2=0, has_ghosts=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
Parameters: - fluids (list) – a list with names of fluid particle arrays
- solids (list) – a list with names of solid (or boundary) particle arrays
- dim (int) – dimensionality of the problem
- gamma (double) – Gamma for equation of state
- alpha (double) – artificial viscosity parameter
- beta (double) – artificial viscosity parameter
- k (double) – kernel scaling parameter
- eps (double) – kernel scaling parameter
- g1 (double) – artificial heat conduction parameter
- g2 (double) – artificial heat conduction parameter
- has_ghosts (bool) – if problem uses ghost particles (periodic or mirror)
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
class
pysph.sph.scheme.
AdamiHuAdamsScheme
(fluids, solids, dim, rho0, c0, nu, h0, gx=0.0, gy=0.0, gz=0.0, p0=0.0, gamma=7.0, tdamp=0.0, alpha=0.0)[source]¶ Bases:
pysph.sph.scheme.TVFScheme
This is a scheme similar to that in the paper:
Adami, S., Hu, X., Adams, N. A generalized wall boundary condition for smoothed particle hydrodynamics. Journal of Computational Physics 2012;231(21):7057-7075.
The major difference is in how the equations are integrated. The paper uses a scheme that does not readily fit how things are done in PySPH, so we simply use the WCSPHStep, which works well.
-
attributes_changed
()[source]¶ Overload this to compute any properties that depend on others.
This is automatically called when configure is called.
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
-
class
pysph.sph.scheme.
GSPHScheme
(fluids, solids, dim, gamma, kernel_factor, g1=0.0, g2=0.0, rsolver=2, interpolation=1, monotonicity=1, interface_zero=True, hybrid=False, blend_alpha=5.0, tf=1.0, niter=20, tol=1e-06, has_ghosts=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
Parameters: - fluids (list) – List of names of fluid particle arrays.
- solids (list) – List of names of solid particle arrays (or boundaries), currently not supported
- dim (int) – Dimensionality of the problem.
- gamma (float) – Gamma for Equation of state.
- kernel_factor (float) – Kernel scaling factor.
- g1, g2 (float) – ADKE style thermal conduction parameters
- rsolver (int) – Riemann solver to use. See pysph.sph.gas_dynamics.gsph for valid options.
- interpolation (int) – Kind of interpolation for the specific volume integrals.
- monotonicity (int) – Type of monotonicity algorithm to use:
  0 : First order GSPH
  1 : I02 algorithm (https://doi.org/10.1006/jcph.2002.7053)
  2 : IwIn algorithm (https://doi.org/10.1111/j.1365-2966.2011.19588.x)
- interface_zero (bool) – Set Interface position s^*_{ij} = 0 for the Riemann problem.
- hybrid, blend_alpha – Hybrid scheme flag and blending alpha value
- tf (double) – Final time used for blending.
- niter (int) – Max number of iterations for iterative Riemann solvers.
- tol (double) – Tolerance for iterative Riemann solvers.
- has_ghosts (bool) – if ghost particles (either mirror or periodic) are used
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
class
pysph.sph.scheme.
GasDScheme
(fluids, solids, dim, gamma, kernel_factor, alpha1=1.0, alpha2=0.1, beta=2.0, adaptive_h_scheme='mpm', update_alpha1=False, update_alpha2=False, max_density_iterations=250, density_iteration_tolerance=0.001, has_ghosts=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
Parameters: - fluids (list) – List of names of fluid particle arrays.
- solids (list) – List of names of solid particle arrays (or boundaries), currently not supported
- dim (int) – Dimensionality of the problem.
- gamma (float) – Gamma for Equation of state.
- kernel_factor (float) – Kernel scaling factor.
- alpha1 (float) – Artificial viscosity parameter.
- alpha2 (float) – Artificial viscosity parameter.
- beta (float) – Artificial viscosity parameter.
- adaptive_h_scheme (str) – Adaptive h scheme to use. One of [‘mpm’, ‘gsph’]
- update_alpha1 (bool) – Update the alpha1 parameter dynamically.
- update_alpha2 (bool) – Update the alpha2 parameter dynamically.
- max_density_iterations (int) – Maximum number of iterations to run for one density step
- density_iteration_tolerance (float) – Maximum difference allowed in two successive density iterations
- has_ghosts (bool) – if ghost particles (either mirror or periodic) are used
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
class
pysph.sph.scheme.
Scheme
(fluids, solids, dim)[source]¶ Bases:
object
An API for an SPH scheme.
Parameters: - fluids (list) – List of names of fluid particle arrays.
- solids (list) – List of names of solid particle arrays (or boundaries).
- dim (int) – Dimensionality of the problem.
-
attributes_changed
()[source]¶ Overload this to compute any properties that depend on others.
This is automatically called when configure is called.
-
configure
(**kw)[source]¶ Configure the scheme with given parameters.
Overload this to do any scheme specific stuff.
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
class
pysph.sph.scheme.
SchemeChooser
(default, **schemes)[source]¶ Bases:
pysph.sph.scheme.Scheme
Parameters: - default (str) – Name of the default scheme to use.
- **schemes (kwargs) – The schemes to choose between.
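A minimal sketch (the scheme parameters are illustrative):
>>> from pysph.sph.scheme import SchemeChooser, WCSPHScheme
>>> wcsph = WCSPHScheme(['fluid'], [], dim=2, rho0=1000.0, c0=10.0,
...                     h0=0.01, hdx=1.3)
>>> scheme = SchemeChooser(default='wcsph', wcsph=wcsph)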
-
attributes_changed
()[source]¶ Overload this to compute any properties that depend on others.
This is automatically called when configure is called.
-
configure
(**kw)[source]¶ Configure the scheme with given parameters.
Overload this to do any scheme specific stuff.
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
class
pysph.sph.scheme.
TVFScheme
(fluids, solids, dim, rho0, c0, nu, p0, pb, h0, gx=0.0, gy=0.0, gz=0.0, alpha=0.0, tdamp=0.0)[source]¶ Bases:
pysph.sph.scheme.Scheme
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
-
-
class
pysph.sph.scheme.
WCSPHScheme
(fluids, solids, dim, rho0, c0, h0, hdx, gamma=7.0, gx=0.0, gy=0.0, gz=0.0, alpha=0.1, beta=0.0, delta=0.1, nu=0.0, tensile_correction=False, hg_correction=False, update_h=False, delta_sph=False, summation_density=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
Parameters: - fluids (list) – List of names of fluid particle arrays.
- solids (list) – List of names of solid particle arrays (or boundaries).
- dim (int) – Dimensionality of the problem.
- rho0 (float) – Reference density.
- c0 (float) – Reference speed of sound.
- gamma (float) – Gamma for the equation of state.
- h0 (float) – Reference smoothing length.
- hdx (float) – Ratio of h/dx.
- gx, gy, gz (float) – Body force acceleration components.
- alpha (float) – Coefficient for artificial viscosity.
- beta (float) – Coefficient for artificial viscosity.
- delta (float) – Coefficient used to control the intensity of diffusion of density
- nu (float) – Real viscosity of the fluid, defaults to no viscosity.
- tensile_correction (bool) – Use tensile correction.
- hg_correction (bool) – Use the Hughes-Graham correction.
- update_h (bool) – Update the smoothing length as per Ferrari et al.
- delta_sph (bool) – Use the delta-SPH correction terms.
- summation_density (bool) – Use summation density instead of continuity.
References
[Hughes2010] J. P. Hughes and D. I. Graham, "Comparison of incompressible and weakly-compressible SPH models for free-surface water flows", Journal of Hydraulic Research, 48 (2010), pp. 105-117.
[Marrone2011] S. Marrone et al., "delta-SPH model for simulating violent impact flows", Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp. 1526-1542.
[Cherfils2012] J. M. Cherfils et al., "JOSEPHINE: A parallel SPH code for free-surface flows", Computer Physics Communications, 183 (2012), pp. 1468-1480.
-
configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters: - kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
- integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
- extra_steppers (dict) – Additional integration stepper instances as a dict.
- **kw (extra arguments) – Any additional keyword args are passed to the solver instance.
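As a sketch of typical usage (the numeric values are illustrative, assuming the create_scheme hook of pysph.solver.application.Application):
def create_scheme(self):
    from pysph.sph.scheme import WCSPHScheme
    s = WCSPHScheme(
        ['fluid'], ['boundary'], dim=2, rho0=1000.0, c0=10.0,
        h0=0.01, hdx=1.3, gamma=7.0, alpha=0.1, beta=0.0
    )
    # Extra keyword arguments are forwarded to the Solver instance.
    s.configure_solver(dt=1e-4, tf=1.0)
    return s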
-
class
pysph.sph.gas_dynamics.magma2.
MAGMA2Scheme
(fluids, solids, dim, gamma, hfact=None, fkern=1.0, adaptive_h_scheme='magma2', max_density_iterations=250, density_iteration_tolerance=0.001, alphamax=1.0, alphamin=0.1, alphac=0.05, beta=2.0, eps=0.01, eta_crit=0.3, eta_fold=0.2, ndes=None, reconstruction_order=2, formulation='mi1', recycle_accelerations=True, has_ghosts=False, l0=-9.210340371976182, l1=-2.995732273553991)[source]¶ Bases:
pysph.sph.scheme.Scheme
MAGMA2 formulations.
Set of Equations: [Rosswog2020b]
Dissipation Limiter: [Rosswog2020a]
Parameters: - fluids (list) – List of names of fluid particle arrays.
- solids (list) – List of names of solid particle arrays (or boundaries), currently not supported
- dim (int) – Dimensionality of the problem.
- gamma (float) – \(\gamma\) for Equation of state.
- hfact (float) – \(h_{fact}\) for smoothing length adaptivity, also referred to as kernel_factor in other schemes like ADKE, MPM, GSPH.
- formulation (str, optional) – Set of governing equations for momentum and energy. Should be one of {‘stdgrad’, ‘mi1’, ‘mi2’}, by default ‘mi1’.
- adaptive_h_scheme (str, optional) – Procedure to adapt smoothing lengths. Should be one of {'magma2', 'mpm'}, by default 'magma2'.
- max_density_iterations (int, optional) – Maximum number of iterations to run for one density step if using MPM procedure to adapt smoothing lengths, by default 250.
- density_iteration_tolerance (float, optional) – Maximum difference allowed in two successive density iterations if using MPM procedure to adapt smoothing lengths, by default 1e-3.
- alphamax (float, optional) – \(\alpha_{max}\) for artificial viscosity switch, by default 1.0
- alphamin (float, optional) – \(\alpha_{0}\) for artificial viscosity switch, by default 0.1
- alphac (float, optional) – \(\alpha_{u}\) for artificial conductivity, by default 0.05
- beta (float, optional) – \(\beta\) for artificial viscosity, by default 2.0
- eps (float, optional) – Numerical parameter often used in denominator to avoid division by zero, by default 0.01
- eta_crit (float, optional) – \(\eta_{crit}\) for slope limiter, by default 0.3
- eta_fold (float, optional) – \(\eta_{fold}\) for slope limiter, by default 0.2
- fkern (float, optional) – \(f_{kern}\), Factor to scale smoothing length for equivalence when using kernel with altered radius_scale, by default 1.0.
- ndes (int, optional) – \(n_{des}\), Desired number of neighbours to be in the kernel support of each particle, by default 300 for 3D.
- reconstruction_order (int, optional) – Order of reconstruction, by default 2.
- recycle_accelerations (bool, optional) – Whether to recycle accelerations, i.e., whether the accelerations used in the correction step can be reused in the successive prediction step, by default True.
- has_ghosts (bool, optional) – If ghost particles (either mirror or periodic) are used, by default False.
- l0 (float, optional) – Low entropy threshold parameter for dissipation trigger, by default log(1e-4).
- l1 (float, optional) – High entropy threshold parameter for dissipation trigger, by default log(5e-2).
Module solver¶
An implementation of a general solver base class
-
class
pysph.solver.solver.
Solver
(dim=2, integrator=None, kernel=None, n_damp=0, tf=1.0, dt=0.001, adaptive_timestep=False, cfl=0.3, output_at_times=(), fixed_h=False, **kwargs)[source]¶ Bases:
object
Base class for all PySPH Solvers
Constructor
Any additional keyword args are used to set the values of any of the attributes.
Parameters: - dim (int) – Dimension of the problem
- integrator (pysph.sph.integrator.Integrator) – Integrator to use
- kernel (pysph.base.kernels.Kernel) – SPH kernel to use
- n_damp (int) – Number of timesteps for which the initial damping is required. This is used to improve stability for problems with strong discontinuity in initial condition. Setting it to zero will disable damping of the timesteps.
- dt (double) – Suggested initial time step for integration
- tf (double) – Final time for integration
- adaptive_timestep (bint) – Flag to use adaptive time steps
- cfl (double) – CFL number for adaptive time stepping
- pfreq (int) – Output files dumping frequency.
- output_at_times (list/array) – Optional list of output times to force dump the output file
- fixed_h (bint) – Flag for constant smoothing lengths h
- reorder_freq (int) – The number of iterations after which particles should be re-ordered. If zero, do not do this.
Example
>>> integrator = PECIntegrator(fluid=WCSPHStep())
>>> kernel = CubicSpline(dim=2)
>>> solver = Solver(dim=2, integrator=integrator, kernel=kernel,
...                 n_damp=50, tf=1.0, dt=1e-3, adaptive_timestep=True,
...                 pfreq=100, cfl=0.5, output_at_times=[1e-1, 1.0])
-
add_post_stage_callback
(callback)[source]¶ These callbacks are called after each integrator stage.
The callbacks are passed (current_time, dt, stage). See the Integrator.one_timestep methods for examples of how this is called.
Example
>>> def post_stage_callback_function(t, dt, stage):
>>>     # This function is called after every stage of the integrator.
>>>     print(t, dt, stage)
>>>     # Do something
>>> solver.add_post_stage_callback(post_stage_callback_function)
-
add_post_step_callback
(callback)[source]¶ These callbacks are called after each timestep is performed.
The callbacks are passed the solver instance (i.e. self).
Example
>>> def post_step_callback_function(solver):
>>>     # This function is called after every time step.
>>>     print(solver.t, solver.dt)
>>>     # Do something
>>> solver.add_post_step_callback(post_step_callback_function)
-
add_pre_step_callback
(callback)[source]¶ These callbacks are called before each timestep is performed.
The callbacks are passed the solver instance (i.e. self).
Example
>>> def pre_step_callback_function(solver):
>>>     # This function is called before every time step.
>>>     print(solver.t, solver.dt)
>>>     # Do something
>>> solver.add_pre_step_callback(pre_step_callback_function)
-
dump_output
(**kwargs)¶ Dump the simulation results to file
The arrays used for printing are determined by the particle array's output_property_arrays data attribute. For debugging it is sometimes nice to have all the arrays (including accelerations) saved. This can be enabled using the command line option --detailed-output.
Output data format:
A single file named as: <fname>_<rank>_<iteration_count>.npz
The data is saved as a Python dictionary with two keys:
- solver_data : solver metadata like time, dt and iteration number.
- arrays : a dictionary keyed on particle array names with particle properties as values.
Example:
You can load the data output by PySPH like so:
>>> from pysph.solver.utils import load
>>> data = load('output_directory/filename_x_xxx.npz')
>>> solver_data = data['solver_data']
>>> arrays = data['arrays']
>>> fluid = arrays['fluid']
>>> ...
In the above example, it is assumed that the output file contained an array named fluid.
-
load_output
(count)[source]¶ Load particle data from dumped output file.
Parameters: count (str) – The iteration count from which to load the data. If count is '?', the list of available data files is returned; else the latest available data file is used. Notes
Data is loaded from the output_directory using the same format as stored by the dump_output() method. Proper functioning requires that all the relevant properties of arrays be dumped.
-
reorder_particles
(**kwargs)¶ Re-order particles so as to coalesce memory access.
-
set_adaptive_timestep
(value)[source]¶ Set it to True to use adaptive timestepping based on the CFL, viscous and force factors.
Look at pysph.sph.integrator.compute_time_step for more details.
-
set_command_handler
(callable, command_interval=1)[source]¶ Set the callable to be called every command_interval iterations.
The callable is called with the solver instance as an argument.
-
set_n_damp
(ndamp)[source]¶ Set the number of timesteps for which the timestep should be initially damped.
-
set_parallel_output_mode
(mode='collected')[source]¶ Set the default solver dump mode in parallel.
The available modes are:
- collected : collect array data from all processors on root and dump a single file.
- distributed : each processor dumps a file locally.
-
setup
(particles, equations, nnps, kernel=None, fixed_h=False)[source]¶ Setup the solver.
The solver’s processor id is set if the in_parallel flag is set to true.
The order of the integrating calcs is determined by the solver’s order attribute.
This is usually called at the start of a PySPH simulation.
Module solver tools¶
-
class
pysph.solver.tools.
DensityCorrection
(app, arr_names, corr='shepard', freq=10, kernel=None)[source]¶ Bases:
pysph.solver.tools.Tool
A tool to reinitialize the density of the fluid particles
Parameters: - app (pysph.solver.application.Application.) – The application instance.
- arr_names (array) – Names of the particle arrays whose densities need to be reinitialized.
- corr (str) – Name of the density reinitialization operation; corr='shepard' uses the zeroth-order Shepard filter.
- freq (int) – Frequency of reinitialization.
- kernel (any kernel from pysph.base.kernels) –
-
class
pysph.solver.tools.
SimpleRemesher
(app, array_name, props, freq=100, xi=None, yi=None, zi=None, kernel=None, equations=None)[source]¶ Bases:
pysph.solver.tools.Tool
A simple tool to periodically remesh a given array of particles onto an initial set of points.
Constructor.
Parameters: - app (pysph.solver.application.Application) – The application instance.
- array_name (str) – Name of the particle array that needs to be remeshed.
- props (list(str)) – List of properties to interpolate.
- freq (int) – Frequency of remeshing operation.
- xi, yi, zi (ndarray) – Positions to remesh the properties onto. If not specified they are taken from the particle arrays at the time of construction.
- kernel (any kernel from pysph.base.kernels) –
- equations (list or None) – Equations to use for the interpolation, passed to the interpolator.
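A sketch of how such a tool is typically attached, assuming the create_tools hook of the Application class (the values are illustrative):
def create_tools(self):
    from pysph.solver.tools import SimpleRemesher
    remesher = SimpleRemesher(self, 'fluid', props=['u', 'v'], freq=50)
    return [remesher]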
-
class
pysph.solver.tools.
Tool
[source]¶ Bases:
object
A tool is typically an object that can be used to perform a specific task on the solver’s pre_step/post_step or post_stage callbacks. This can be used for a variety of things. For example, one could save a plot, print debug statistics or perform remeshing etc.
To create a new tool, simply subclass this class and overload any of its desired methods.
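For instance, a minimal sketch of a custom tool using the post_step hook mentioned above (the class name and body are hypothetical):
from pysph.solver.tools import Tool

class PrintStep(Tool):
    # Hypothetical tool that prints the solver time after every step.
    def __init__(self, app):
        self.app = app

    def post_step(self, solver):
        print(solver.t, solver.dt)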
-
post_stage
(current_time, dt, stage)[source]¶ If overloaded, this is called automatically after each integrator stage, i.e. if the integrator is a two stage integrator it will be called after the first and second stages.
The method is passed (current_time, dt, stage). See the Integrator.one_timestep methods for examples of how this is called.
-
Module boundary conditions¶
Inlet Outlet Manager
-
class
pysph.sph.bc.inlet_outlet_manager.
CopyNormalsandDistances
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Copy normals and distances from outlet/inlet particles to ghosts
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.sph.bc.inlet_outlet_manager.
IOEvaluate
(dest, sources, x, y, z, xn, yn, zn, maxdist=1000.0)[source]¶ Bases:
pysph.sph.equation.Equation
Compute ioid for the particles:
- 0 : particle is in the fluid
- 1 : particle is inside the inlet/outlet
- 2 : particle is out of the inlet/outlet
Parameters: - dest (str) – destination particle array name
- sources (list) – List of source particle arrays
- x (float) – x coordinate of interface point
- y (float) – y coordinate of interface point
- z (float) – z coordinate of interface point
- xn (float) – x component of interface outward normal
- yn (float) – y component of interface outward normal
- zn (float) – z component of interface outward normal
- maxdist (float) – Maximum length of inlet/outlet
-
class
pysph.sph.bc.inlet_outlet_manager.
InletBase
(inlet_pa, dest_pa, inletinfo, kernel, dim, active_stages=[1], callback=None, ghost_pa=None)[source]¶ Bases:
object
An API to add/delete particle when moving between inlet-fluid
Parameters: - inlet_pa (particle_array) – particle array for inlet
- dest_pa (particle_array) – particle_array of the fluid
- inletinfo (InletInfo instance) – contains information about the inlet
- kernel (Kernel instance) – Kernel to be used for computations
- dim (int) – dimension of the problem
- active_stages (list) – stages of integrator at which update should be active
- callback (function) – callback after the update function
- ghost_pa (particle_array) – particle_array of the ghost_inlet
-
class
pysph.sph.bc.inlet_outlet_manager.
InletInfo
(pa_name, normal, refpoint, has_ghost=True, update_cls=None, equations=None, umax=1.0, props_to_copy=None)[source]¶ Bases:
object
Create an object with information about an inlet. All other parameters not passed here are evaluated by the InletOutletManager once the inlet is created.
Parameters: - pa_name (str) – Name of the inlet
- normal (list) – Components of normal (float)
- refpoint (list) – Point at the fluid-inlet interface (float)
- has_ghost (bool) – if True, the ghost particles will be created
- update_cls (class_name) – the class which is to be used to update the inlet/outlet
- equations (list) – List of equations (optional)
- props_to_copy (array) – properties to copy
-
class
pysph.sph.bc.inlet_outlet_manager.
InletOutletManager
(fluid_arrays, inletinfo, outletinfo, extraeqns=None)[source]¶ Bases:
object
Create the object to manage inlet-outlet boundary conditions. Most of the variables are evaluated after the scheme and particles are created.
Parameters: - fluid_arrays (list) – List of fluid particles array names (str)
- inletinfo (list) – List of inlets (InletInfo)
- outletinfo (list) – List of outlet (OutletInfo)
- extraeqns (dict) – Dict of custom equations
-
add_io_properties
(pa, scheme=None)[source]¶ Add properties to be used in inlet/outlet equations
Parameters: - pa (particle_array) – Particle array of inlet/outlet
- scheme (pysph.sph.scheme) – The instance of the scheme class
-
create_ghost
(pa_arr, inlet=True)[source]¶ Creates ghosts for the given inlet/outlet particles
Parameters: - pa_arr (Particle array) – particles array for which ghost is required
- inlet (bool) – if True, inlet info will be used for ghost creation
-
get_equations
(scheme, **kw)[source]¶ Returns the equations for inlet/outlet
Parameters: - scheme (pysph.sph.scheme) – The instance of the scheme class
- **kw (extra arguments) – Extra arguments depending upon the scheme used
-
get_equations_post_compute_acceleration
()[source]¶ Returns the equations for inlet/outlet used post acceleration computation
-
get_inlet_outlet
(particle_array)[source]¶ Returns a list of Inlet and Outlet instances which perform the change of inlet particles to outlet particles.
Parameters: particle_array (list) – List of all particle_arrays
-
get_io_names
(ghost=False)[source]¶ Return all the names of inlets and outlets.
Parameters: ghost (bool) – if True, return the names of the ghosts also.
-
get_stepper
(scheme, integrator, **kw)[source]¶ Returns the steppers for inlet/outlet
Parameters: - scheme (pysph.sph.scheme) – The instance of the scheme class
- integrator (pysph.sph.integrator) – The parent class of the integrator
- **kw (extra arguments) – Extra arguments depending upon the scheme used
-
class
pysph.sph.bc.inlet_outlet_manager.
OutletBase
(outlet_pa, source_pa, outletinfo, kernel, dim, active_stages=[1], callback=None, ghost_pa=None)[source]¶ Bases:
object
An API to add/delete particle when moving between fluid-outlet
Parameters: - outlet_pa (particle_array) – particle array for outlet
- source_pa (particle_array) – particle_array of the fluid
- ghost_pa (particle_array) – particle_array of the outlet ghost
- outletinfo (OutletInfo instance) – contains information about the outlet
- kernel (Kernel instance) – Kernel to be used for computations
- dim (int) – dimension of the problem
- active_stages (list) – stages of integrator at which update should be active
- callback (function) – callback after the update function
-
class
pysph.sph.bc.inlet_outlet_manager.
OutletInfo
(pa_name, normal, refpoint, has_ghost=False, update_cls=None, equations=None, umax=1.0, props_to_copy=None)[source]¶ Bases:
pysph.sph.bc.inlet_outlet_manager.InletInfo
Create object with information of outlet
The name is kept different for distinction only.
-
class
pysph.sph.bc.inlet_outlet_manager.
UpdateNormalsAndDisplacements
(dest, sources, xn, yn, zn, xo, yo, zo)[source]¶ Bases:
pysph.sph.equation.Equation
Update normal and perpendicular distance from the interface for the inlet/outlet particles
Parameters: - dest (str) – destination particle array name
- sources (list) – List of source particle arrays
- xn (float) – x component of interface outward normal
- yn (float) – y component of interface outward normal
- zn (float) – z component of interface outward normal
- xo (float) – x coordinate of interface point
- yo (float) – y coordinate of interface point
- zo (float) – z coordinate of interface point
Module solver_interfaces¶
-
class
pysph.solver.solver_interfaces.
CommandlineInterface
[source]¶ Bases:
object
command-line interface to the solver controller
-
class
pysph.solver.solver_interfaces.
CrossDomainXMLRPCRequestHandler
(*args, directory=None, **kwargs)[source]¶ Bases:
xmlrpc.server.SimpleXMLRPCRequestHandler
,http.server.SimpleHTTPRequestHandler
SimpleXMLRPCRequestHandler subclass which attempts to do CORS
CORS is Cross-Origin-Resource-Sharing (http://www.w3.org/TR/cors/) which enables xml-rpc calls from a different domain than the xml-rpc server (such requests are otherwise denied)
-
class
pysph.solver.solver_interfaces.
MultiprocessingClient
(address=None, authkey=None, serializer='pickle', start=True)[source]¶ Bases:
multiprocessing.managers.BaseManager
A client for the multiprocessing interface
Override the run() method to perform the appropriate actions on the proxy instance of the controller object, or add an interface using the add_interface method, similar to the Controller.add_interface method.
-
class
pysph.solver.solver_interfaces.
MultiprocessingInterface
(address=None, authkey=None)[source]¶ Bases:
multiprocessing.managers.BaseManager
A multiprocessing interface to the solver controller
This object exports a controller instance proxy over the multiprocessing interface. Control actions can be performed by connecting to the interface and calling methods on the controller proxy instance
-
class
pysph.solver.solver_interfaces.
XMLRPCInterface
(addr, requestHandler=<class 'pysph.solver.solver_interfaces.CrossDomainXMLRPCRequestHandler'>, logRequests=True, allow_none=True, encoding=None, bind_and_activate=True)[source]¶ Bases:
xmlrpc.server.SimpleXMLRPCServer
An XML-RPC interface to the solver controller
Currently cannot work with objects which cannot be marshalled (which is basically most custom classes, most importantly ParticleArray and numpy arrays)
Miscellaneous Tools for PySPH¶
Input/Output of data files¶
The following functions are handy functions when processing output generated by PySPH or to generate new files.
-
pysph.solver.utils.
dump
(filename, particles, solver_data, detailed_output=False, only_real=True, mpi_comm=None, compress=False)[source]¶ Dump the given particles and solver data to the given filename.
Parameters: - filename (str) – Filename to dump to.
- particles (sequence(ParticleArray)) – Sequence of particle arrays to dump.
- solver_data (dict) – Additional information to dump about solver state.
- detailed_output (bool) – Specifies if all arrays should be dumped.
- only_real (bool) – Only dump the real particles.
- mpi_comm (mpi4py.MPI.Intracomm) – An MPI communicator to use for parallel communications.
- compress (bool) – Specify if the file is to be compressed or not.
If mpi_comm is not passed or is set to None, the local particles alone are dumped; otherwise only rank 0 dumps the output.
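For instance (a sketch with illustrative values; the solver_data keys mirror those shown for load below):
>>> from pysph.base.utils import get_particle_array
>>> from pysph.solver.utils import dump
>>> fluid = get_particle_array(name='fluid', x=[0., 1.])
>>> dump('output_000.npz', [fluid], dict(t=0.0, dt=1e-4, count=0))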
-
pysph.solver.utils.
get_files
(dirname=None, fname=None, endswith=('hdf5', 'npz'))[source]¶ Get all solution files in a given directory, dirname.
Parameters: - dirname (str) – Name of directory.
- fname (str) – An initial part of the filename, if not specified use the first part of the dirname.
- endswith (str) – The extension of the file to load.
-
pysph.solver.utils.
load
(fname)[source]¶ Load the output data
Parameters: fname (str) – Name of the file or full path Examples
>>> data = load('elliptical_drop_100.npz')
>>> data.keys()
['arrays', 'solver_data']
>>> arrays = data['arrays']
>>> arrays.keys()
['fluid']
>>> fluid = arrays['fluid']
>>> type(fluid)
pysph.base.particle_array.ParticleArray
>>> data['solver_data']
{'count': 100, 'dt': 4.6416394784204199e-05, 't': 0.0039955855395528766}
-
pysph.solver.utils.
load_and_concatenate
(prefix, nprocs=1, directory='.', count=None)[source]¶ Load the results from multiple files.
Given a filename prefix and the number of processors, return a concatenated version of the dictionary returned via load.
Parameters: - prefix (str) – A filename prefix for the output file.
- nprocs (int) – The number of processors (files) to read
- directory (str) – The directory for the files
- count (int) – The file iteration count to read. If None, the last available one is read
Dump XDMF¶
Interpolator¶
This module provides a convenient class called
interpolator.Interpolator
which can be used to interpolate any
scalar values from the points onto either a mesh or a collection of other
points. SPH interpolation is performed with a simple Shepard filtering.
-
class
pysph.tools.interpolator.
InterpolateFunction
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.tools.interpolator.
InterpolateSPH
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters: - dest (str) – name of the destination particle array
- sources (list of str or None) – names of the source particle arrays
-
class
pysph.tools.interpolator.
Interpolator
(particle_arrays, num_points=125000, kernel=None, x=None, y=None, z=None, domain_manager=None, equations=None, method='shepard')[source]¶ Bases:
object
Convenient class to interpolate particle properties onto a uniform grid or given set of particles. This is particularly handy for visualization.
The x, y, z coordinates need not be specified; if they are not, the bounds of the interpolated domain are automatically computed and num_points points are placed uniformly in this domain.
Parameters: - particle_arrays (list) – A list of particle arrays.
- num_points (int) – the number of points to interpolate on to.
- kernel (Kernel) – the kernel to use for interpolation.
- x (ndarray) – the x-coordinate of points on which to interpolate.
- y (ndarray) – the y-coordinate of points on which to interpolate.
- z (ndarray) – the z-coordinate of points on which to interpolate.
- domain_manager (DomainManager) – An optional Domain manager for periodic domains.
- equations (sequence) – A sequence of equations or groups. Defaults to None. This is used only if the default interpolation equations are inadequate.
- method (str) – String with the following allowed methods: ‘shepard’, ‘sph’, ‘order1’
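For example, to interpolate the pressure of saved output (the filename is illustrative):
>>> from pysph.solver.utils import load
>>> from pysph.tools.interpolator import Interpolator
>>> data = load('elliptical_drop_100.npz')
>>> parrays = list(data['arrays'].values())
>>> interp = Interpolator(parrays, num_points=10000)
>>> p = interp.interpolate('p')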
-
interpolate
(prop, comp=0)[source]¶ Interpolate given property.
Parameters: - prop (str) – The name of the property to interpolate.
- comp (int) – The component of the gradient required
Returns: Return type: A numpy array suitably shaped with the property interpolated.
-
set_domain
(bounds, shape)[source]¶ Set the domain to interpolate into.
Parameters: - bounds (tuple) – (xmin, xmax, ymin, ymax, zmin, zmax)
- shape (tuple) – (nx, ny, nz)
-
set_interpolation_points
(x=None, y=None, z=None)[source]¶ Set the points on which we must interpolate the arrays.
If any of x, y, z is not passed it is assumed to be 0.0 and shaped like the other non-None arrays.
Parameters: - x (ndarray) – the x-coordinate of points on which to interpolate.
- y (ndarray) – the y-coordinate of points on which to interpolate.
- z (ndarray) – the z-coordinate of points on which to interpolate.
-
update
(update_domain=True)[source]¶ Update the NNPS when particles have moved.
If the update_domain is False, the domain is not updated.
Use this when the arrays are the same but the particles have themselves changed. If the particle arrays themselves change use the update_particle_arrays method instead.
-
update_particle_arrays
(particle_arrays)[source]¶ Call this for a new set of particle arrays which have the same properties as before.
For example, if you are reading the particle array data from files, each time you load a new file a new particle array is read with the same properties. Call this function to reset the arrays.
-
class
pysph.tools.interpolator.
SPHFirstOrderApproximation
(dest, sources, dim=1)[source]¶ Bases:
pysph.sph.equation.Equation
First order SPH approximation.
The method used to solve the linear system in this function is not the same as in the reference. In this function \(Ax=b\) is solved, where \(A := moment\) (the moment matrix) and \(b := p_{sph}\) (the property calculated using basic SPH). The calculation needs the "moment" to be evaluated before this step, which is done in SPHFirstOrderApproximationPreStep.
References
[Liu2006] M.B. Liu, G.R. Liu, “Restoring particle consistency in smoothed particle hydrodynamics”, Applied Numerical Mathematics Volume 56, Issue 1 2006, Pages 19-36, ISSN 0168-9274
-
class
pysph.tools.interpolator.
SPHFirstOrderApproximationPreStep
(dest, sources, dim=1)[source]¶ Bases:
pysph.sph.equation.Equation
-
pysph.tools.interpolator.
get_bounding_box
(particle_arrays, tight=False, stretch=0.05)[source]¶ Find the size of the domain given a sequence of particle arrays.
If tight is True, the bounds are tight; if not, the domain is stretched along each dimension by an amount stretch, specified as a percentage of the length along that dimension.
SPH Evaluator¶
This module provides a class that allows one to evaluate a set of equations on a collection of particle arrays. This is very handy for non-trivial post-processing that needs to be quick.
A convenience class that combines an AccelerationEval and an SPHCompiler to allow a user to specify particle arrays, equations, an optional domain and kernel to produce an SPH evaluation.
This is handy for post-processing.
-
class
pysph.tools.sph_evaluator.
SPHEvaluator
(arrays, equations, dim, kernel=None, domain_manager=None, backend=None, nnps_factory=<class 'pysph.base.linked_list_nnps.LinkedListNNPS'>)[source]¶ Bases:
object
Constructor.
Parameters: - arrays (list(ParticleArray)) –
- equations (list) –
- dim (int) –
- kernel (kernel instance.) –
- domain_manager (DomainManager) –
- backend (str: indicates the backend to use.) – one of (‘opencl’, ‘cython’, ‘’, None)
- nnps_factory (A factory that creates an NNPSBase instance.) –
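A minimal sketch (this assumes the evaluate() method of the class and uses SummationDensity from pysph.sph.basic_equations; the values are illustrative):
>>> import numpy as np
>>> from pysph.base.utils import get_particle_array
>>> from pysph.sph.basic_equations import SummationDensity
>>> from pysph.tools.sph_evaluator import SPHEvaluator
>>> x = np.linspace(0, 1, 50)
>>> pa = get_particle_array(name='fluid', x=x, m=1.0/50, h=2.5/50, rho=1.0)
>>> eqns = [SummationDensity(dest='fluid', sources=['fluid'])]
>>> sph_eval = SPHEvaluator([pa], eqns, dim=1)
>>> sph_eval.evaluate()
>>> rho = pa.rho  # recomputed densities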
-
update
(update_domain=True)[source]¶ Update the NNPS when particles have moved.
If the update_domain is False, the domain is not updated.
Use this when the arrays are the same but the particles have themselves changed. If the particle arrays themselves change use the update_particle_arrays method instead.
-
update_particle_arrays
(arrays)[source]¶ Call this for a new set of particle arrays which have the same properties as before.
For example, if you are reading the particle array data from files, each time you load a new file a new particle array is read with the same properties. Call this function to reset the arrays.
Mesh Converter¶
The following functions can be used to convert a mesh file supported by meshio to a set of surface points.
Particle Packer¶
The following functions can be used to create a domain with particles packed around a solid surface in both 2D and 3D.
-
pysph.tools.geometry.
get_packed_periodic_packed_particles
(add_opt_func, folder, dx, L, B, H=0, dim=2, dfreq=-1, pb=None, nu=None, k=None, tol=0.01)[source]¶ Creates a periodic packed 2D or 3D domain. It creates particles which are not aligned but packed such that the number density is uniform.
Parameters: - add_opt_func (options function from the parent Application class) –
- folder (Application class output directory) –
- dx (float) – required particle spacing
- L (float) – length of the domain
- B (float) – Width of the domain
- H (float) – Height of the domain
- dim (int) – dimensionality of the problem
- dfreq (int) – projection frequency of particles
- pb (float) – background pressure (default: 1.0)
- nu (float) – viscosity coefficient (default: 0.3/dx)
- k (float) – coefficient of repulsion (default: 0.005*dx)
- tol (float) – tolerance value for convergence (default: 1e-2)
Returns: - xs (float) – x coordinate of solid particles
- ys (float) – y coordinate of solid particles
- zs (float) – z coordinate of solid particles
- xf (float) – x coordinate of fluid particles
- yf (float) – y coordinate of fluid particles
- zf (float) – z coordinate of fluid particles
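A rough sketch of how this might be invoked from within an Application subclass; self.add_user_options and self.output_dir are assumptions here for the options function and output directory:
xs, ys, zs, xf, yf, zf = get_packed_periodic_packed_particles(
    self.add_user_options, self.output_dir, dx=0.02, L=1.0, B=1.0, dim=2)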
-
pysph.tools.geometry.
get_packed_2d_particles_from_surface_coordinates
(add_opt_func, folder, dx, x, y, pb=None, nu=None, k=None, scale=1.0, shift=False, dfreq=-1, invert_normal=False, hardpoints=None, use_prediction=False, filter_layers=False, reduce_dfreq=False, tol=0.01)[source]¶ Creates a packed configuration of particles around the given coordinates of a 2D geometry.
Parameters: - add_opt_func (method) – options function from the parent Application class
- folder (string) – Application class output directory
- dx (float) – required particle spacing
- x (array) – x coordinates of the geometry
- y (array) – y coordinates of the geometry
- pb (float) – background pressure (default: 1.0)
- nu (float) – viscosity coefficient (default: 0.3/dx)
- k (float) – coefficient of repulsion (default: 0.005*dx)
- scale (float) – the scaling factor for the coordinates
- dfreq (int) – projection frequency of particles
- invert_normal (bool) – if True the computed normals are inverted
- hardpoints (dict) – the dictionary of hardpoints
- use_prediction (bool) – if True, points are projected quickly to reach prediction points
- filter_layers (bool) – if True, particles away from boundary are frozen
- reduce_dfreq (bool) – if True, reduce projection frequency
- tol (float) – tolerance value for convergence (default: 1e-2)
Returns: - xs (array) – x coordinate of solid particles
- ys (array) – y coordinate of solid particles
- zs (array) – z coordinate of solid particles
- xf (array) – x coordinate of fluid particles
- yf (array) – y coordinate of fluid particles
- zf (array) – z coordinate of fluid particles
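As a hedged sketch, the surface-coordinate variant can be fed the discretized boundary of any closed 2D curve, for example a unit circle (again assuming an Application context for the add_opt_func and folder arguments):
import numpy as np

from pysph.tools.geometry import (
    get_packed_2d_particles_from_surface_coordinates)

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
x, y = np.cos(theta), np.sin(theta)  # unit circle boundary

xs, ys, zs, xf, yf, zf = get_packed_2d_particles_from_surface_coordinates(
    self.add_user_options, self.output_dir, dx=0.02, x=x, y=y,
)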
- pysph.tools.geometry.get_packed_2d_particles_from_surface_file(add_opt_func, folder, dx, filename, pb=None, nu=None, k=None, scale=1.0, shift=False, dfreq=-1, invert_normal=False, hardpoints=None, use_prediction=False, filter_layers=False, reduce_dfreq=False, tol=0.01)[source]¶
Creates a packed configuration of particles around the geometry described by a file containing its x, y coordinates.
Parameters: - add_opt_func (method) – options function from the parent Application class
- folder (string) – Application class output directory
- dx (float) – required particle spacing
- filename (string) – file containing the x, y coordinates of the geometry
- pb (float) – background pressure (default: 1.0)
- nu (float) – viscosity coefficient (default: 0.3/dx)
- k (float) – coefficient of repulsion (default: 0.005*dx)
- scale (float) – the scaling factor for the coordinates
- dfreq (int) – projection frequency of particles
- invert_normal (bool) – if True the computed normals are inverted
- hardpoints (dict) – the dictionary of hardpoints
- use_prediction (bool) – if True, points are projected quickly to reach prediction points
- filter_layers (bool) – if True, particles away from boundary are frozen
- reduce_dfreq (bool) – if True, reduce projection frequency
- tol (float) – tolerance value for convergence (default: 1e-2)
Returns: - xs (array) – x coordinate of solid particles
- ys (array) – y coordinate of solid particles
- zs (array) – z coordinate of solid particles
- xf (array) – x coordinate of fluid particles
- yf (array) – y coordinate of fluid particles
- zf (array) – z coordinate of fluid particles
- pysph.tools.geometry.get_packed_3d_particles_from_surface_file(add_opt_func, folder, dx, filename, pb=None, nu=None, k=None, scale=1.0, shift=False, dfreq=-1, invert_normal=False, hardpoints=None, use_prediction=False, filter_layers=False, reduce_dfreq=False, tol=0.01)[source]¶
Creates a packed configuration of particles around the geometry described by an STL file containing the x, y, z coordinates and normals.
Parameters: - add_opt_func (method) – options function from the parent Application class
- folder (string) – Application class output directory
- dx (float) – required particle spacing
- filename (string) – the STL filename
- pb (float) – background pressure (default: 1.0)
- nu (float) – viscosity coefficient (default: 0.3/dx)
- k (float) – coefficient of repulsion (default: 0.005*dx)
- scale (float) – the scaling factor for the coordinates
- dfreq (int) – projection frequency of particles
- invert_normal (bool) – if True the computed normals are inverted
- hardpoints (dict) – the dictionary of hardpoints
- use_prediction (bool) – if True, points are projected quickly to reach prediction points
- filter_layers (bool) – if True, particles away from boundary are frozen
- reduce_dfreq (bool) – if True, reduce projection frequency
- tol (float) – tolerance value for convergence (default: 1e-2)
Returns: - xs (array) – x coordinate of solid particles
- ys (array) – y coordinate of solid particles
- zs (array) – z coordinate of solid particles
- xf (array) – x coordinate of fluid particles
- yf (array) – y coordinate of fluid particles
- zf (array) – z coordinate of fluid particles
- pysph.tools.geometry.create_fluid_around_packing(dx, xf, yf, L, B, zf=[0.0], H=0.0, **props)[source]¶
Creates the outer fluid particles around the generated packing, adds the packed fluid particles, and generates a concatenated particle array.
Parameters: - dx (float) – particle spacing
- xf (array) – x coordinate of fluid particles
- yf (array) – y coordinate of fluid particles
- L (float) – length of the domain
- B (float) – width of the domain
- zf (array) – z coordinate of fluid particles
- H (float) – height of the domain
Returns: Particle array of the fluid (packed and outer particles combined).
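Continuing the packer sketches above, the packed fluid can then be wrapped in a regular outer fluid block; the property keyword arguments shown here are placeholders:
from pysph.tools.geometry import create_fluid_around_packing

dx = 0.05
fluid = create_fluid_around_packing(dx, xf, yf, L=2.0, B=2.0,
                                    rho=1000.0, m=1000.0 * dx ** 2)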
Solver Interfaces¶
Interfaces are a way to control, gather data from, and execute commands on a running solver instance. This can be useful, for example, to pause/continue the solver, get the iteration count, get or set the timestep or final time, or simply to monitor the running solver.
CommandManager¶
The CommandManager class provides functionality to control the solver in a restricted way, so that multiple interfaces can be added to the solver in a simple manner.
The figure Overview of the Solver Interfaces shows an overview of the classes and objects involved in adding an interface to the solver.
The basic design of the controller is as follows (a sketch of the handler contract follows this list):
- Solver has a method set_command_handler() which takes a callable and a command_interval, and calls the callable with the solver itself as an argument every command_interval iterations.
- The method CommandManager.execute_commands() of the CommandManager object is set as the command handler for the solver. The CommandManager can then perform any operation on the solver.
- Interfaces are added to the CommandManager by the CommandManager.add_interface() method, which takes a callable (the interface) as an argument and calls it in a separate thread with a new Controller instance as an argument.
- A Controller instance is a proxy for the CommandManager: its methods redirect to CommandManager.dispatch(), which is synchronized in the CommandManager class so that only one thread (interface) can call it at a time. The CommandManager queues the commands, sends them to all processors in a parallel run, and executes them when the solver calls its execute_commands() method.
- Writing a new interface is simply a matter of writing a function or method which calls appropriate methods on the Controller instance passed to it.
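To make the handler contract concrete, here is a small sketch (not PySPH source) of a custom command handler; using solver.count as the iteration counter is an assumption:
def my_handler(solver):
    # called by the solver every command_interval iterations
    print('iteration', solver.count)

solver.set_command_handler(my_handler, command_interval=10)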
Controller¶
The Controller class is a convenience class whose various methods redirect to the Controller.dispatch() method, which does the actual work of queuing the commands. This method is synchronized so that multiple controllers can operate in a thread-safe manner. The Controller also restricts the operations which are possible on the solver through the various interfaces. This makes adding multiple interfaces to the solver convenient and safe. Each interface gets a separate Controller instance so that the various interfaces are isolated from each other.
Blocking and Non-Blocking mode¶
The Controller object has a notion of blocking and non-blocking mode.
- In blocking mode, operations wait until the command is actually executed on the solver and then return the result. Execution blocks until the execute_commands method of the CommandManager is run by the solver, which happens every command_interval iterations. This mode is the default.
- In non-blocking mode, the Controller queues the command for execution and returns a task_id for the command. The result of the command can be obtained any time later using the get_result method of the Controller, passing the task_id as the argument. The get_result call blocks until the result is available.
Switching between modes
The blocking/non-blocking mode can be queried and set using the Controller.get_blocking() and Controller.set_blocking() methods.
NOTE: The blocking/non-blocking mode does not apply to getting and setting solver properties. Those methods always return immediately, even though a setter is actually executed only when the CommandManager.execute_commands() function is called by the solver.
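As a short hedged sketch of the two modes (method names as described above; the exact form of the returned task_id is an assumption):
controller.set_blocking(False)           # queue commands, do not wait
task_id = controller.get_count()         # returns a task_id immediately
count = controller.get_result(task_id)   # blocks until the solver next
                                         # runs execute_commands
controller.set_blocking(True)            # back to the default mode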
Interfaces¶
Interfaces are functions which are called in a separate thread and receive a
Controller
instance so that they can query the solver, get/set
various properties and execute commands on the solver in a safe manner.
Here is an example of a simple interface which prints the iteration count every second to monitor the solver:
import time

def simple_interface(controller):
    while True:
        print(controller.get_count())
        time.sleep(1)
You can use dir(controller)
to find out what methods are available on the
controller instance.
A few simple interfaces are implemented in the solver_interfaces module, namely CommandlineInterface, XMLRPCInterface and MultiprocessingInterface, and also in examples/controller_elliptical_drop_client.py.
You can check the code to see how to implement various kinds of interfaces.
Adding Interface to Solver¶
To add interfaces to a plain solver (one not created using Application), the following steps need to be taken:
- Set a CommandManager for the solver (it is not set up by default).
- Add the interface to the CommandManager.
The following code demonstrates how the simple interface created above can be added to a solver:
from pysph.solver.controller import CommandManager

# add a CommandManager to the solver
command_manager = CommandManager(solver)
solver.set_command_handler(command_manager.execute_commands)

# add the interface
command_manager.add_interface(simple_interface)
For code which uses Application, you simply need to add the interface to the application's command_manager:
app = Application()
app.set_solver(s)
...
app.command_manager.add_interface(simple_interface)
Commandline Interface¶
The CommandlineInterface enables you to control the solver from the command line even as it is running. Here is a sample session of the command-line interface from the controller_elliptical_drop.py example:
$ python controller_elliptical_drop.py
pysph[0]>>>
Invalid command
Valid commands are:
p | pause
c | cont
g | get <name>
s | set <name> <value>
q | quit -- quit commandline interface (solver keeps running)
pysph[9]>>> g dt
1e-05
pysph[64]>>> g tf
0.1
pysph[114]>>> s tf 0.01
None
pysph[141]>>> g tf
0.01
pysph[159]>>> get_particle_array_names
['fluid']
The number inside the square brackets indicates the iteration count.
Note that not all operations can be performed using the command-line interface, notably those which use complex Python objects.
XML-RPC Interface¶
The XMLRPCInterface exports the controller object's methods over XML-RPC. An example HTML file, controller_elliptical_drop_client.html, uses this XML-RPC interface to control the solver from a web page.
The following code snippet shows the use of the XML-RPC interface, which is not much different from any other interface, as they all export the interface of the Controller object:
from xmlrpc.client import ServerProxy  # xmlrpclib in Python 2

# ServerProxy takes the server URI, e.g. 'http://localhost:8900'
client = ServerProxy('http://localhost:8900', allow_none=True)

# the client proxies all the methods of the controller
print(client.system.listMethods())
print(client.get_t())
print(client.get('count'))
The XML-RPC interface also implements a simple HTTP server which serves HTML, JavaScript and image files from the directory it is started from. This enables direct use of the file controller_elliptical_drop_client.html to get an HTML interface without the need for a dedicated HTTP server.
The figure PySPH html client using XML-RPC interface shows a screenshot of the HTML client in action.
[Image: ../_images/html-client.png — PySPH html client using XML-RPC interface]
One limitation of the XML-RPC interface is that arbitrary Python objects cannot be sent across; the XML-RPC standard predefines a limited set of types which can be transferred.
Multiprocessing Interface¶
The MultiprocessingInterface also exports the controller object, similar to the XML-RPC interface, but it is more fully featured: it can use authentication keys and can send arbitrary picklable objects. Usage of the multiprocessing client is also similar to the XML-RPC client:
from pysph.solver.solver_interfaces import MultiprocessingClient

# address is a tuple of (hostname, port), e.g. ('localhost', 8900);
# authkey is the authentication key set on the server (defaults to 'pysph')
client = MultiprocessingClient(address, authkey)

# proxy to the controller
controller = client.controller

pa_names = controller.get_particle_array_names()

# arbitrary picklable Python objects can be transferred (e.g. a ParticleArray)
pa = controller.get_named_particle_array(pa_names[0])
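Putting the pieces together, here is a small hedged sketch that polls a running solver once a second, using only controller methods shown in this section:
import time

# monitor until the simulated time reaches 0.01
while controller.get_t() < 0.01:
    print('iteration', controller.get_count(), 't =', controller.get_t())
    time.sleep(1)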
Example¶
Here’s an example (straight from controller_elliptical_drop_client.py) put together to show how the controller can be used to create useful interfaces for the solver. The code below plots the particle positions as a scatter map with color-mapped velocities, and updates the plot every second while maintaining user interactivity:
import time

import numpy
from matplotlib import pyplot as plt
import gobject  # requires the GTK matplotlib backend (pygtk)

from pysph.solver.solver_interfaces import MultiprocessingClient

# address and authkey as in the MultiprocessingClient example above
client = MultiprocessingClient(address, authkey)
controller = client.controller

pa_name = controller.get_particle_array_names()[0]
pa = controller.get_named_particle_array(pa_name)

# plt.ion()
fig = plt.figure()
ax = fig.add_subplot(111)
line = ax.scatter(pa.x, pa.y, c=numpy.hypot(pa.u, pa.v))

t = time.time()

def update():
    global t
    t2 = time.time()
    dt = t2 - t
    t = t2
    print('count:', controller.get_count(), '\ttimer time:', dt)
    pa = controller.get_named_particle_array(pa_name)
    line.set_offsets(numpy.c_[pa.x, pa.y])  # zip() is lazy in Python 3
    line.set_array(numpy.hypot(pa.u, pa.v))
    fig.canvas.draw()
    print('\tresult & draw time:', time.time() - t)
    return True  # returning True keeps the timer active

update()
# update the plot every second while keeping the window interactive
gobject.timeout_add_seconds(1, update)
plt.show()