PHASTA

From CCI User Wiki

This page serves as a gateway for information related to the two-phase level set code implemented in PHASTA.

Introductory Information

"Parallel Hierarchic Adaptive Stabilized Transient Analysis" or PHASTA software currently can model compressible or incompressible, laminar or turbulent, steady or unsteady flows in 3D, using unstructured grids.

Also refer to the SCOREC Research Wiki PHASTA pages.

Building the software

See PHASTA wiki on GitHub

If you are building or running the incompressible solver, be sure to first load the libles module. On the BGQ and RSA systems the command is

 module load proprietary/libles

and on DRP it is

 module load libles

Problem Setup

The PHASTA problem definition and mesh generation GUI requires a graphical connection to the CCI. SSH X-forwarding and the VNC are two ways to establish this connection; the VNC is preferred. Once connected to the VNC [link], the GUI is launched to set solver and output controls [link], material properties, boundary and initial conditions [link], set mesh generation controls [link], generate the mesh, and generate the PHASTA input files [link].

Connect to the CCI VNC

Follow these instructions.

Set Solver and Output Controls

See PhSolverSimulationControls.odt for a description of the commonly used controls for a two phase simulation.

Define Boundary/Initial Conditions

Boundary conditions are applied in the SimModeler problem definition and mesh generation GUI.

Essential BCs must be set by the user on faces and, optionally, on edges. If any thermodynamic quantity is set on an edge, then no thermodynamic quantity is inherited from the associated faces. If any velocity component is set on an edge, then no velocity component is inherited from the faces. The same rules apply to vertices with respect to their associated edges.

comp1: group of magnitude and direction, used when one velocity component is set

comp3: group of magnitude and direction, used when all three velocity components are set

comp1 and comp3 are not single attributes but velocity attribute groups; each contains a direction vector and a magnitude. The two cannot be set at the same time. comp1 means the velocity is constrained in one direction only and free in the other directions; comp3 means the velocity is constrained in all three directions, with the magnitude and direction being those of the resultant vector.

The natural pressure boundary condition does not enforce an exact pressure value at every node on the outflow face; it weakly prescribes the pressure magnitude over the plane. If the flow is laminar and the velocity field is smooth, the resulting pressure distribution is nearly uniform. If the flow is turbulent and the velocity field has strong fluctuations, the pressure field also exhibits fluctuations.

Select the model (Selection->Select Model) to make the initial velocity and initial pressure attributes active.

The initial velocity attribute specifies the initial velocity field over the entire domain via a velocity vector u_x, u_y, u_z in units of m/s.

The initial pressure attribute specifies the initial pressure field over the entire domain in units of Pascals.

Set Mesh Generation Controls

Follow these instructions to generate a mesh using a script.

PreProcessing

See Chef

Flow Solver Execution

The following instructions are for running the flow solver only. To run the automated adaptive loop, which alternates between flow solver execution and mesh adaptation, proceed to Automated Adaptive Execution. Alternatively, certain simulations may be run through the web portal; see Running Through the Portal.

The flow solver requires that the following directory structure and files exist, where $PWD is the directory containing the phSolver executable.

$PWD/<phSolver-executable>
$PWD/input.config
$PWD/solver.inp
$PWD/<NUM_PROCESSES>-procs_case
  • input.config contains ALL the phSolver input options, each set to its default value.
  • solver.inp contains a subset of the phSolver input options, set to the non-default values appropriate for the simulation. See PhSolverSimulationControls.odt for a description of the commonly used controls for a two phase simulation.
  • <NUM_PROCESSES>-procs_case is a directory containing the PHASTA-format mesh (geombc.dat.*), the PHASTA-format field data (restart.<time-step-number>.*), and numstart.dat, which contains a single integer defining the timestep number to start from.

Job Submission

The CCI systems are scheduled with SLURM. Instructions for creating job submission scripts can be found on the SLURM Wiki page.

The general command for running PHASTA's incompressible (phastaIC.exe) or compressible (phastaC.exe) flow solver is:

 mpirun -np NUM_PROCESSES phasta[C|IC].exe

Depending on the CCI system being used mpirun may be replaced with srun and/or require additional arguments.
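For illustration, a minimal SLURM batch script for such a run might look like the following. This is a sketch only: the job name, node counts, time limit, and process count are placeholder values, and the correct launcher (srun vs. mpirun) and module name depend on the specific CCI system, as noted above.

```shell
#!/bin/bash
# Hypothetical SLURM batch script for a PHASTA flow-solver run.
# All resource values below are placeholders, not CCI-specific settings.
#SBATCH --job-name=phasta
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=16
#SBATCH --time=01:00:00

# The incompressible solver requires the LibLES module
# (module name differs between systems; see above).
module load libles

# NUM_PROCESSES here (128) must match the <NUM_PROCESSES>-procs_case directory.
srun -n 128 ./phastaIC.exe
```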

Output

For the flow solve, the following is output from phSolver for each timestep:

time step | timestep size | elapsed physical time | elapsed wall time | residual | 'normalized' residual | max(delta U/U) | max(delta p/P) | <NOT RELEVANT> | CG-GMRES iterations | MAX CFL | mesh region id with max CFL
5 1.000E-10 5.000E-10 1.629E+02 2.303E-05 ( 50) 1.219E-01 1.346E+00 < 184- 902| 16> [ 16 - 10] 3.397E+00 101

Columns seven and eight, max(delta U/U) and max(delta p/P), list the maximum relative change of the solution in this iteration. If they are large, you are either far from convergence of a steady solution or perhaps taking too large a time step.
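Assuming the log format shown above, these two columns can be pulled out of a solver log with a short awk one-liner. Note that the parenthesized nonlinear-iteration field "( 50)" splits into two whitespace-separated tokens, so the targets land in awk fields 8 and 9:

```shell
# Extract max(delta U/U) and max(delta p/P) from a sample phSolver log line.
# The "( 50)" token splits into two fields, so the targets are $8 and $9.
line='5 1.000E-10 5.000E-10 1.629E+02 2.303E-05 ( 50) 1.219E-01 1.346E+00 < 184- 902| 16> [ 16 - 10] 3.397E+00 101'
echo "$line" | awk '{ printf "max(dU/U)=%s max(dp/P)=%s\n", $8, $9 }'
# prints: max(dU/U)=1.219E-01 max(dp/P)=1.346E+00
```

The same pattern, applied with `grep`/`awk` over a full solver log, gives a quick view of convergence behavior over time.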

For the convection of the interface and distance field by the flow, the following is output from phSolver for each timestep:

time step | timestep size | elapsed physical time | elapsed wall time | residual | max(delta phi/phi) | GMRES iterations | MAX CFL | mesh region id with max CFL
5 1.000E-10 5.000E-10 1.630E+02 4.932E-09 1.021E-05 [ 10] 4.589E-02 436


For redistancing the following is output from phSolver at each timestep for each redistancing iteration:

time step | pseudo timestep size | elapsed physical time | elapsed wall time | residual | max(delta phi/phi) | GMRES iterations | MAX pseudo CFL | mesh region id with max pseudo CFL
5 1.317E-08 1.357E-08 1.635E+02 1.477E-19 1.697E-06 [ 10] 3.996E-01 101

where phi is the signed distance field.

Automated Adaptive Execution

See PhastaChef

PHASTA Web-Portal

The PHASTA Web-Portal, https://phasta.scigap.org/, supports execution of PHASTA on the CCI BlueGene/Q through a web-based interface. If you are interested in using the portal, please contact the portal administrators.

PostProcessing

Note that the following instructions assume that the ParaView module with customizations for PHASTA has been loaded.

Running ParaView on the CCI

Using ParaView in serial with multiple parts

Create phasta.pht with the following contents:

<?xml version="1.0" ?>
<PhastaMetaFile number_of_pieces="128">
 <GeometryFileNamePattern pattern="128-procs_case/geombc.dat.%d"
                          has_piece_entry="1"
                          has_time_entry="0"/>
 <FieldFileNamePattern pattern="128-procs_case/%d/restart.%d.%d"
                       has_piece_entry="1"
                       has_time_entry="1"/>
 <TimeSteps number_of_steps="30"
            auto_generate_indices="1"
            start_index="10"
            increment_index_by="10"
            start_value="0."
            increment_value_by="0.001">
 </TimeSteps>
 <Fields number_of_fields="4">
   <Field paraview_field_tag="velocity"
          phasta_field_tag="solution"
          start_index_in_phasta_array="1"
          number_of_components="3"
          data_dependency="0"
          data_type="double"/>
   <Field paraview_field_tag="pressure"
          phasta_field_tag="solution"
          start_index_in_phasta_array="0"
          number_of_components="1"
          data_dependency="0"
          data_type="double"/>
  <Field paraview_field_tag="scr1"
          phasta_field_tag="solution"
          start_index_in_phasta_array="5"
          number_of_components="1"
          data_dependency="0"
          data_type="double"/>
  <Field paraview_field_tag="scr2"
          phasta_field_tag="solution"
          start_index_in_phasta_array="6"
          number_of_components="1"
          data_dependency="0"
          data_type="double"/>
 </Fields>
</PhastaMetaFile>

The following fields might need editing in this file:

  • number_of_pieces --> the total number of partitions/parts to load (the number of cores used by PHASTA for the calculation, 128 in this case)
  • number_of_steps --> the total number of time-steps to load
  • start_index --> the starting time-step <value> (the reader looks for restart.<value>.<partitionNum> files)
  • increment_index_by --> the interval between consecutive time-steps to load
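With auto_generate_indices enabled, the set of loaded time-step indices follows directly from start_index, increment_index_by, and number_of_steps. For the serial example above (start 10, increment 10, 30 steps), ParaView will look for restart files at indices 10, 20, ..., 300. A quick shell check of which restart indices will be requested:

```shell
# Time-step indices implied by the serial phasta.pht example above:
# start_index=10, increment_index_by=10, number_of_steps=30.
START=10; INC=10; STEPS=30
seq "$START" "$INC" $((START + INC * (STEPS - 1)))
# prints 10 20 30 ... 300, one index per line
```

Comparing this list against the restart.<value>.<partitionNum> files actually present in the procs_case directory catches missing time steps before loading.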

Launch ParaView and load the phasta.pht file from the File menu.


Using ParaView in parallel with multiple parts

Load the following phasta.pht file into ParaView, making the edits discussed above:

 <?xml version="1.0" ?>
 <PhastaMetaFile number_of_pieces="2048">
   <GeometryFileNamePattern pattern="2048-procs_case/geombc.dat.%d"
                            has_piece_entry="1"
                            has_time_entry="0"/>
   <FieldFileNamePattern pattern="2048-procs_case/%d/restart.%d.%d"
                         has_piece_entry="1"
                         has_time_entry="1"/>
   <TimeSteps number_of_steps="73"
              auto_generate_indices="1"
              start_index="0"
              increment_index_by="20"
              start_value="0.0."
              increment_value_by="1e-7">
   </TimeSteps>
   <Fields number_of_fields="4">
     <Field paraview_field_tag="velocity"
            phasta_field_tag="solution"
            start_index_in_phasta_array="1"
            number_of_components="3"
            data_dependency="0"
            data_type="double"/>
     <Field paraview_field_tag="pressure"
            phasta_field_tag="solution"
            start_index_in_phasta_array="0"
            number_of_components="1"
            data_dependency="0"
            data_type="double"/>
    <Field paraview_field_tag="scr1"
            phasta_field_tag="solution"
            start_index_in_phasta_array="5"
            number_of_components="1"
            data_dependency="0"
            data_type="double"/>
    <Field paraview_field_tag="scr2"
            phasta_field_tag="solution"
            start_index_in_phasta_array="6"
            number_of_components="1"
            data_dependency="0"
            data_type="double"/>
   </Fields>
 </PhastaMetaFile>

Papers and Publications

PHASTA has enabled the following publications.

    2014

  • Rasquin, Michel, et al.  "Scalable implicit flow solver for realistic wing simulations with flow control."  Computing in Science & Engineering 16.6 (2014): 13-21.
  • Rasquin, Michel, et al.  "Scalable fully implicit finite element flow solver with application to high-fidelity flow control simulations on a realistic wing design."   Computing in Science and Engineering 16.6 (2014): 13-21.

    2013

  • Rodriguez, Joseph M., et al.   "A parallel adaptive mesh method for the numerical simulation of multiphase flows."   Computers & Fluids 87 (2013): 115-131.

    2010

  • Bolotnov, Igor A., et al.  "Interaction of computational tools for multiscale multiphysics simulation of generation-iv reactors."  International Congress on Advances in Nuclear Power Plants 2010, ICAPP 2010. 2010.
  • Behafarid, F., et al.  "Two phase cross jet in a fuel rod assembly using DNS/Level-Set method."   7th International Conference on Multiphase Flow (ICMF-2010). 2010.
  • Bolotnov, I. A., et al.  "Multiscale computer simulation of fission gas discharge during loss-of-flow accident in sodium fast reactor."   Proc. Computational Fluid Dynamics for Nuclear Reactor Safety (CFD4NRS-3), Washington, DC, USA (2010).
  • Liu, Ning, et al.  "Massively parallel I/O for partitioned solver systems."   Parallel Processing Letters 20.04 (2010): 377-395.
  • Fu, Jing, et al.  "Scalable parallel I/O alternatives for massively parallel partitioned solver systems."   Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW), 2010 IEEE International Symposium on. IEEE, 2010.

    2009

  • Devine, K., et al.  "Interoperable mesh components for large-scale, distributed-memory simulations."   Journal of Physics: Conference Series. Vol. 180. No. 1. IOP Publishing, 2009.
  • Sahni, Onkar, et al.  "Strong scaling analysis of a parallel, unstructured, implicit solver and the influence of the operating system interference."   Scientific Programming 17.3 (2009): 261-274.

    2007

  • Shephard, M. S., et al.  "Parallel adaptive simulations on unstructured meshes."   Journal of physics: conference series. Vol. 78. No. 1. IOP Publishing, 2007.

    2005

  • Nagrath, Sunitha, et al.  "Computation of incompressible bubble dynamics with a stabilized finite element level set method."  Computer Methods in Applied Mechanics and Engineering 194.42-44 (2005): 4565-4587. http://dx.doi.org/10.1016/j.cma.2004.11.012.

    2001

  • Whiting, Christian H., and Kenneth E. Jansen.  "A stabilized finite element method for the incompressible Navier-Stokes equations using a hierarchical basis."   International Journal for Numerical Methods in Fluids 35.1 (2001): 93-116.
