
#1 Re: Code_Aster usage » Transient mechanical analysis under an evolutionary temperature field » 2019-06-22 12:32:46

Hi,
maybe you can find some answers here: https://youtu.be/1ZNsgykHnGE
1- Did you run the transient thermal analysis correctly?
2- Did you use PROJ_CHAMP to project the thermal result onto the mechanical model? Make sure the PROJECTION option is set to YES. Also try DYNA_NON_LINE and experiment with its initial conditions to see whether you can include the thermal effects. The challenge is to apply an evolving temperature field to the deforming mesh nodes under dynamic loading.
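In command-file terms, the projection and the temperature command variable would look roughly like this (a minimal sketch with placeholder names such as ther_res, mo_ther, mo_meca; check the keywords against the PROJ_CHAMP and AFFE_MATERIAU documentation):

# Minimal sketch (placeholder names): project the transient thermal result onto
# the mechanical model, then declare it as the command variable TEMP.
temp_p = PROJ_CHAMP(RESULTAT=ther_res,      # EVOL_THER from the transient thermal run
                    MODELE_1=mo_ther,       # thermal model
                    MODELE_2=mo_meca,       # mechanical model
                    PROJECTION='OUI')

chmat = AFFE_MATERIAU(MAILLAGE=mesh_meca,
                      AFFE=_F(TOUT='OUI', MATER=steel),
                      AFFE_VARC=_F(TOUT='OUI',
                                   NOM_VARC='TEMP',
                                   EVOL=temp_p,
                                   VALE_REF=20.0))   # stress-free reference temperature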

Anirudh

#2 Re: Code_Aster usage » Homard missing in testcase zzzz121a run » 2019-06-10 11:38:04

Hi,
here is my profile.sh, located in $ASTER_ROOT/PAR14.2/share/aster/:

# created by waf using data/wscript


LD_LIBRARY_PATH=.:$ASTER_VERSION_DIR/lib:\
/home/anirudh/code_aster/public/homard-11.10/Linux64:\
/home/anirudh/code_aster/public/homard-11.10:\
/usr/lib/x86_64-linux-gnu/openmpi/lib:\
/home/anirudh/code_aster/public/metis-5.1.0/lib:\
/home/anirudh/code_aster/public/tfel-3.1.1/lib:\
/home/anirudh/code_aster/mumps-5.1.2_mpi/lib:\
/home/anirudh/code_aster/public/scotch-6.0.4/lib:\
/home/anirudh/code_aster/public/med-3.3.1/lib:\
/home/anirudh/code_aster/public/hdf5-1.8.14/lib:\
/home/anirudh/dev/aster-prerequisites/petsc-3.7.7/arch-linux2-c-opt/lib:\
/home/anirudh/code_aster/OpenBLAS/lib:\
/home/anirudh/code_aster/scalapack/lib:\
/usr/lib/lapack:\
/usr/lib/libblas:\
/home/anirudh/code_aster/PAR14.2/lib:\
/home/anirudh/code_aster/PAR14.2/lib/aster:\
$LD_LIBRARY_PATH
export LD_LIBRARY_PATH

PYTHONPATH=\
.:/home/anirudh/code_aster/PAR14.2/lib/aster:\
/home/anirudh/code_aster/public/tfel-3.1.1/lib/python2.7/site-packages:\
$PYTHONPATH
export PYTHONPATH

# do not change PYTHONHOME under the SALOME environment
if [ -z "${ABSOLUTE_APPLI_PATH}" ]; then
    PYTHONHOME=/usr
    export PYTHONHOME
fi

# as PYTHONHOME is changed, path to 'python' must preceed all others if a
# subprocess calls it
PATH=/home/anirudh/salome_meca/V2018.0.1_public/tools/Homard_aster-1110_aster/Linux64:\
/home/anirudh/code_aster/public/homard-11.10/Linux64:\
/usr/bin:\
/home/anirudh/code_aster/public/homard-11.10/ASTER_HOMARD/homard:\
$PATH
export PATH

ASTER_LIBDIR=/home/anirudh/code_aster/PAR14.2/lib/aster
export ASTER_LIBDIR

ASTER_DATADIR=/home/anirudh/code_aster/PAR14.2/share/aster
export ASTER_DATADIR

ASTER_LOCALEDIR=/home/anirudh/code_aster/PAR14.2/share/locale/aster
export ASTER_LOCALEDIR

ASTER_ELEMENTSDIR=/home/anirudh/code_aster/PAR14.2/lib/aster
export ASTER_ELEMENTSDIR

# MFront specific
export TFELHOME=/home/anirudh/code_aster/public/tfel-3.1.1
export PATH=${TFELHOME}/bin:$PATH

I think I am specifying the path to homard incorrectly.


Thanks
Anirudh

#3 Code_Aster usage » Homard missing in testcase zzzz121a run » 2019-06-09 12:43:23

Anirudh
Replies: 1

Hello,
I am trying to run the testcase zzzz121a with both the sequential and the parallel builds of Code_Aster 14.2 from ASTK. It involves mesh refinement using Homard.
However, I get the following errors with both versions.

For the parallel version:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   ! <S> Exception utilisateur levee mais pas interceptee. !
   ! Les bases sont fermees.                               !
   ! Type de l'exception : error                           !
   !                                                       !
   ! Le fichier homard est inconnu.                        !
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!



and for the sequential version 14.2, I get the following error:



###############################################
           Client name : anirudh.domain.org
              Username : anirudh
-----------------------------------------------
           Server name : localhost
              Username : anirudh
                  Node : anirudh
              Platform : LINUX64
-----------------------------------------------
    Code_Aster version : /home/anirudh/code_aster/14.2/share/aster
-----------------------------------------------
            Time (min) : 2
           Memory (MB) : 762.0
  Number of processors : 1   (OpenMP)
       Number of nodes : 1   (MPI)
  Number of processors : 1   (MPI)
                  Mode : interactif
-----------------------------------------------
            Debug mode : nodebug
-----------------------------------------------
            BTC script : generated
-----------------------------------------------
   Version ASTK Server : 2018.0
        Version Client : ASTK 2018.0.final
###############################################

<A>_ALARM          14.2 is already known as /home/anirudh/code_aster/14.2/share/aster (/opt/aster/14.2/share/aster is ignored). Check your configuration file : /home/anirudh/code_aster/etc/codeaster/aster


<A>_ALARM          14.2 is already known as /home/anirudh/code_aster/14.2/share/aster (/opt/aster/14.2/share/aster is ignored). Check your configuration file : /home/anirudh/code_aster/etc/codeaster/aster


<A>_ALARM          14.2 is already known as /home/anirudh/code_aster/14.2/share/aster (/opt/aster/14.2/share/aster is ignored). Check your configuration file : /home/anirudh/code_aster/etc/codeaster/aster


--------------------------------------------------------------------------------
Code_Aster execution

<INFO> prepare environment in /tmp/anirudh-anirudh-interactif_7253-anirudh

--------------------------------------------------------------------------------
Copying datas

copying .../tests/zzzz121a.comm...                                      [  OK  ]
copying .../tests/zzzz121a.mmed...                                      [  OK  ]
<INFO> Parameters : memory 762 MB - time limit 60 s

--------------------------------------------------------------------------------
Content of /tmp/anirudh-anirudh-interactif_7253-anirudh before execution

total 88
drwx------  3 anirudh anirudh  4096 Jun  9 17:06 .
drwxrwxrwt 25 root    root    12288 Jun  9 17:06 ..
-rw-r--r--  1 anirudh anirudh  1057 Jun  9 17:06 7253-anirudh.export
-rw-r--r--  1 anirudh anirudh  2785 Jun  9 17:06 config.txt
-rw-r--r--  1 anirudh anirudh  9760 Jun  9 17:06 fort.1.1
-rw-r--r--  1 anirudh anirudh 46536 Jun  9 17:06 fort.20
drwxr-xr-x  2 anirudh anirudh  4096 Jun  9 17:06 REPE_OUT


--------------------------------------------------------------------------------
Code_Aster run

<INFO> Command line 1 :
<INFO> /home/anirudh/code_aster/14.2/bin/aster /home/anirudh/code_aster/14.2/lib/aster/Execution/E_SUPERV.py -commandes fort.1  --num_job=7253-anirudh --mode=interactif --rep_outils=/home/anirudh/code_aster/outils --rep_mat=/home/anirudh/code_aster/14.2/share/aster/materiau --rep_dex=/home/anirudh/code_aster/14.2/share/aster/datg --numthreads=1 --suivi_batch --memjeveux=95.25 --tpmax=60.0
Segmentation fault (core dumped)
EXECUTION_CODE_ASTER_EXIT_7253-anirudh=139
<INFO> Code_Aster run ended, diagnostic : <F>_ABNORMAL_ABORT

--------------------------------------------------------------------------------
Content of /tmp/anirudh-anirudh-interactif_7253-anirudh after execution

.:
total 15188
drwx------  3 anirudh anirudh     4096 Jun  9 17:06 .
drwxrwxrwt 25 root    root       12288 Jun  9 17:06 ..
-rw-r--r--  1 anirudh anirudh     1057 Jun  9 17:06 7253-anirudh.export
-rw-r--r--  1 anirudh anirudh     2785 Jun  9 17:06 config.txt
-rw-------  1 anirudh anirudh 50348032 Jun  9 17:06 core
-rw-r--r--  1 anirudh anirudh     9760 Jun  9 17:06 fort.1
-rw-r--r--  1 anirudh anirudh     9760 Jun  9 17:06 fort.1.1
-rw-r--r--  1 anirudh anirudh    46536 Jun  9 17:06 fort.20
-rw-r--r--  1 anirudh anirudh       64 Jun  9 17:06 fort.6
drwxr-xr-x  2 anirudh anirudh     4096 Jun  9 17:06 REPE_OUT

REPE_OUT:
total 8
drwxr-xr-x 2 anirudh anirudh 4096 Jun  9 17:06 .
drwx------ 3 anirudh anirudh 4096 Jun  9 17:06 ..


--------------------------------------------------------------------------------
Size of bases


--------------------------------------------------------------------------------
Copying results


<F>_ABNORMAL_ABORT Code_Aster run ended



---------------------------------------------------------------------------------
                                            cpu     system    cpu+sys    elapsed
---------------------------------------------------------------------------------
   Preparation of environment              0.00       0.00       0.00       0.00
   Copying datas                           0.03       0.01       0.04       0.20
   Code_Aster run                          0.12       0.10       0.22       1.22
   Copying results                         0.00       0.00       0.00       0.00
---------------------------------------------------------------------------------
   Total                                   0.19       0.12       0.31       1.72
---------------------------------------------------------------------------------

as_run 2018.0

------------------------------------------------------------
--- DIAGNOSTIC JOB : <F>_ABNORMAL_ABORT
------------------------------------------------------------


EXIT_CODE=4

Could someone please take a look?
I am attaching the full log for the parallel version.

Best Regards
Anirudh Nehra

#4 Re: Salome-Meca usage » MESHING strategy for a 3D body » 2019-05-22 13:08:20

Hi Nicolas
I want to vary the distribution of nodes along the edge as a parabolic function, so that the nodes are sparse at the centre and become denser towards both ends of the edge. I tried the arithmetic (AP) and geometric (GP) progression hypotheses, but they only allow the spacing to change in one direction, whereas my case needs it to decrease and then increase. Something like the pattern below (see the sketch after it).
|*
|*
|
|*
|
|
|*
|
|
|*
|
|*
|*

I need this as a sub-mesh on the height edge of a hexahedral (i,j,k) mesh.
I cannot use the Body Fitting algorithm, because it does not respect the edges of the partition and it does not allow a 1D max size parameter.
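In script form, what I am after would be something like this (a rough, untested sketch using the analytic-density variant of the Number of Segments hypothesis; 'main_mesh' and 'height_edge' are placeholders, and I am not sure this hypothesis can be combined with the (i,j,k) hexahedral sub-mesh):

# Rough sketch (untested): parabolic node density along one edge, dense at both
# ends and sparse at the centre. 'main_mesh' is an existing smeshBuilder Mesh and
# 'height_edge' an existing GEOM edge; distribution-type codes may differ by version.
algo1d = main_mesh.Segment(geom=height_edge)
hyp = algo1d.NumberOfSegments(20)                       # 20 segments on this edge
hyp.SetDistrType(3)                                     # 3 = analytic density function f(t), t in [0, 1]
hyp.SetExpressionFunction('1 + 8*(t - 0.5)*(t - 0.5)')  # parabola: highest density at both ends
hyp.SetConversionMode(1)                                # use f(t) directly, cutting negative values
main_mesh.Compute()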

Thanks a lot.
Anirudh

#5 Code_Aster installation » /bin/sh: 1: -np: not found V14.2 parallel install » 2019-05-21 12:02:09

Anirudh
Replies: 0

Dear all,
I have recently installed Debian 9 and I want to compile Code_Aster 14.2 on the system, following the recommendation given here: https://sites.google.com/site/codeastersalomemeca/home/code_asterno-heiretuka/parallel-code_aster-12-6

I have been partly successful, getting past the MUMPS_MPI install. I also installed the latest OpenMPI 4.0 in my home folder, removed the preinstalled 2.0.2 version, and updated $PATH and $LD_LIBRARY_PATH to the best of my knowledge.
Now, when installing ScaLAPACK with the scalapack installer mentioned above, I get this error:

make[1]: Leaving directory '/home/anirudh/dev/aster-prerequisites/scalapack_installer/build/scalapack-2.0.2/TESTING/EIG'

   *************************************************  
   ***                   OUTPUT BLACS TESTING xCbtest                   ***  
   *************************************************  
/bin/sh: 1: -np: not found

This seems to mean that my MPI is not working correctly. However, if I run:

mpirun -np 4 /path/to/a/sample/helloworld_executable 

the output prints Hello world on 4 lines, as it should. So I am unable to locate the error. Can someone provide insight into what is wrong?
Also, when I try to compile PETSc by changing directory to the petsc-src folder and issuing the command:

./config/configure.py --with-mpi-dir=/home/anirudh/openmpi --with-blas-lapack-lib=/home/anirudh/code_aster/OpenBLAS/lib/libopenblas.a --with-scalapack-dir=/home/anirudh/code_aster/scalapack --download-hypre=yes --download-ml=yes --with-debugging=0 COPTFLAGS=-O1 CXXOPTFLAGS=-O1 FOPTFLAGS=-O1 --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions --with-x=0 --with-shared-libraries=0 --with-mumps=yes --download-mumps=yes

the configure stage is successful, ending with:
Configure stage complete. Now build PETSc libraries with (gnumake build):
   make PETSC_DIR=/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src PETSC_ARCH=arch-linux2-c-opt all
But when I issue the above make command, it eventually fails with the following errors:

/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/src/sys/objects/init.c: In function ‘PetscOptionsCheckInitial_Private’:
/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/src/sys/objects/init.c:420:40: error: ‘MPI_Handler_function’ undeclared (first use in this function)
     ierr = MPI_Comm_create_errhandler((MPI_Handler_function*)Petsc_MPI_DebuggerOnError,&err_handler);CHKERRQ(ierr);
                                        ^~~~~~~~~~~~~~~~~~~~
/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/src/sys/objects/init.c:420:40: note: each undeclared identifier is reported only once for each function it appears in
/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/src/sys/objects/init.c:420:61: error: expected expression before ‘)’ token
     ierr = MPI_Comm_create_errhandler((MPI_Handler_function*)Petsc_MPI_DebuggerOnError,&err_handler);CHKERRQ(ierr);
                                                             ^
/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/src/sys/objects/init.c:420:12: error: too few arguments to function ‘MPI_Comm_create_errhandler’
     ierr = MPI_Comm_create_errhandler((MPI_Handler_function*)Petsc_MPI_DebuggerOnError,&err_handler);CHKERRQ(ierr);
            ^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/include/petscsys.h:130:0,
                 from /home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/src/sys/objects/init.c:9:
/home/anirudh/openmpi/include/mpi.h:1332:20: note: declared here
 OMPI_DECLSPEC  int MPI_Comm_create_errhandler(MPI_Comm_errhandler_function *function,
                    ^~~~~~~~~~~~~~~~~~~~~~~~~~
/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/src/sys/objects/init.c:470:63: error: expected expression before ‘)’ token
       ierr = MPI_Comm_create_errhandler((MPI_Handler_function*)Petsc_MPI_AbortOnError,&err_handler);CHKERRQ(ierr);
                                                               ^
/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/src/sys/objects/init.c:470:14: error: too few arguments to function ‘MPI_Comm_create_errhandler’
       ierr = MPI_Comm_create_errhandler((MPI_Handler_function*)Petsc_MPI_AbortOnError,&err_handler);CHKERRQ(ierr);
              ^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/include/petscsys.h:130:0,
                 from /home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/src/sys/objects/init.c:9:
/home/anirudh/openmpi/include/mpi.h:1332:20: note: declared here
 OMPI_DECLSPEC  int MPI_Comm_create_errhandler(MPI_Comm_errhandler_function *function,
                    ^~~~~~~~~~~~~~~~~~~~~~~~~~
gmakefile:150: recipe for target 'arch-linux2-c-opt/obj/sys/objects/init.o' failed
make[2]: *** [arch-linux2-c-opt/obj/sys/objects/init.o] Error 1
make[2]: *** Waiting for unfinished jobs....
          CC arch-linux2-c-opt/obj/sys/objects/pinit.o
          FC arch-linux2-c-opt/obj/sys/f90-src/fsrc/f90_fwrap.o
Use "/usr/bin/make V=1" to see verbose compile lines, "/usr/bin/make V=0" to suppress.
          FC arch-linux2-c-opt/obj/sys/f90-mod/petscsysmod.o
make[2]: Leaving directory '/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src'
/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src/lib/petsc/conf/rules:81: recipe for target 'gnumake' failed
make[1]: *** [gnumake] Error 2
make[1]: Leaving directory '/home/anirudh/dev/aster-prerequisites/petsc_aster/petsc-src'
**************************ERROR*************************************
  Error during compile, check arch-linux2-c-opt/lib/petsc/conf/make.log
  Send it and arch-linux2-c-opt/lib/petsc/conf/configure.log to petsc-maint@mcs.anl.gov
********************************************************************
makefile:30: recipe for target 'all' failed
make: *** [all] Error 1

It seems there is some error around the MPI error-handler calls (MPI_Handler_function / MPI_Comm_create_errhandler). I am unable to spot the error here either. I am attaching the terminal output for both runs (the command for the ScaLAPACK install is at line 1 and for PETSc at line 69).
The MPI version is really important to me, as I have some projects with frictional contact to undertake. Could someone please take a look?

Edit: I removed libopenmpi-dev, libopenmpi-common, openmpi-bin, libopenmpi2, etc. using the Synaptic package manager. These were preinstalled with Debian, but their version (2.0.2-2) differs from my OpenMPI version, 4.0.1. Sadly, I did not find these packages for version 4.0.1 anywhere online or in the Debian repositories.
Also, with the default OpenMPI installation from those packages (version 2.0.2-2), the PETSc and ScaLAPACK installations were fine, but the Code_Aster compilation failed after reaching 100% for some reason.
If there is any repository I am missing, please suggest it.

Merci
Anirudh

#6 Re: Code_Aster development » MPI for Homard » 2019-03-05 16:55:17

Hi,
I read somewhere in the documentation that only the sequential version of Code_Aster supports Homard. I want to know: if I try using Homard with the parallel version (from ASTK), will it work or will I get a fatal error?

Thanks
Anirudh

#7 Re: Code_Aster usage » DEFI_LIST_INST or LIST_REEL for NL analysis & convergence issues » 2019-02-09 13:42:25

Hello,
DEFI_LIST_INST incorporates a time-step refinement algorithm based on a number of criteria, such as the residual RESI_GLOB_RELA. DEFI_LIST_INST takes a LIST_REEL as input.
If you use a plain LIST_REEL in STAT_NON_LINE, only the given time steps are used for the calculation, and the code stops if it cannot converge on one of them. If you use a LIST_INST instead, the code tries to subdivide the time step according to the settings and recompute.
You can also provide the computation time steps manually through LIST_REEL.
You can try running some testcases and go through the related documentation and work by others.
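For illustration, a minimal sketch with placeholder values:

# Minimal sketch (placeholder values): a raw time list, and an adaptive list built
# on top of it that subdivides a step automatically when convergence fails.
l_reel = DEFI_LIST_REEL(DEBUT=0.0,
                        INTERVALLE=_F(JUSQU_A=1.0, NOMBRE=100))

l_inst = DEFI_LIST_INST(DEFI_LIST=_F(LIST_INST=l_reel),
                        ECHEC=_F(EVENEMENT='ERREUR',
                                 ACTION='DECOUPE',
                                 SUBD_METHODE='MANUEL',
                                 SUBD_PAS=4,        # split a failed step into 4
                                 SUBD_NIVEAU=3))    # at most 3 levels of subdivision

# STAT_NON_LINE(..., INCREMENT=_F(LIST_INST=l_inst), ...)  # adaptive time stepping
# STAT_NON_LINE(..., INCREMENT=_F(LIST_INST=l_reel), ...)  # fixed steps, stops on failure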

Anirudh

#8 Code_Aster development » MPI for Homard » 2019-02-05 01:56:49

Anirudh
Replies: 3

Hello,
I am interested in developing parallel mesh refinement using the Homard tool, because MACR_ADAP_MAIL only works with the sequential version of Code_Aster. Could you please help me chart a course/roadmap, including key points such as which texts to refer to, which commands to modify, and roughly how long it would take?

Thanks
Anirudh

#9 Re: Code_Aster usage » Code stops due to communication error » 2019-02-02 04:49:47

Hi,
Thanks for the reply.
I tried again and it worked for a smaller time band (up to 0.2 s); it did not converge beyond 0.2 s. However, the memory usage during CALC_CHAMP (for the von Mises stress) is still huge: for 36 time steps it was 23 GB in all (13 GB of free RAM out of 16 GB total, plus 11 GB of swap). Is there no way to avoid this? I see messages in the Xterm that many fields are stored in RAM for each converged step (SIEF_ELGA, CONT_NOEU, COMPORTEMENT, VARI_ELGA, DEPL, ACCEL, VITE, etc.). Is it possible to store only DEPL during the calculation?
Also, you mentioned second-level parallelism in Code_Aster. Do I activate it with the ncpus parameter in ASTK? I thought OpenMP was valid only for 1 core and MULT_FRONT. If yes, will it increase the RAM requirement?

Thanks a lot.
Regards
Anirudh Nehra

#10 Re: Code_Aster usage » Code stops due to communication error » 2019-01-30 12:33:23

Hello,
Can someone please help me get rid of this error? I think that while calculating fields, large amounts of data need to be written; during this operation, processors may be idle and not communicate. Also, how do I change the 120 s limit, which I think is the delay after which MPI_ABORT is invoked?

Regards
Anirudh

#11 Code_Aster usage » Code stops due to communication error » 2019-01-28 15:50:22

Anirudh
Replies: 4

Hello everyone,
I ran a simulation and it converged well, but while post-processing with CALC_CHAMP, the code stopped with the following error:

  
   !-----------------------------------------------------------------------------------!
   ! <A> <APPELMPI_94>                                                                 !
   !                                                                                   !
   !     Il n'y a plus de temps pour continuer.                                        !
   !     Le calcul sera interrompu à la fin de la prochaine itération, du prochain     !
   !     pas de temps ou de la prochaine commande, ou bien brutalement par le système. !
   !                                                                                   !
   !     On accorde un délai de 120 secondes pour la prochaine communication.          !
   !                                                                                   !
   !     Conseil :                                                                     !
   !         Augmentez la limite en temps du calcul.                                   !
   !                                                                                   !
   !                                                                                   !
   ! This is a warning. If you do not understand the meaning of this                   !
   !  warning, you can obtain unexpected results!                                      !
   !-----------------------------------------------------------------------------------!
   
   
   !-------------------------------------------------------!
   ! <EXCEPTION> <APPELMPI_96>                             !
   !                                                       !
   !  Processor # 0 did not answer within the time limit.  !
   !   Communication of the type  MPI_ISEND cancelled.     !
   !-------------------------------------------------------!
   
   
   !-----------------------------------------------------------------------!
   ! <F> <APPELMPI_99>                                                     !
   !                                                                       !
   !  At least a processor is not able to take part in the communication.  !
   !  The execution is thus stopped.                                       !
   !                                                                       !
   !                                                                       !
   ! This error is fatal. The code stops.                                  !
   !-----------------------------------------------------------------------!
   

I am pretty sure at least 45 minutes were still available for the calculation.
The bash window was not being updated, so I closed it. Now I want to retrieve the results using POURSUITE but cannot find glob.1.
Could someone help me understand why I got this error, and where I can find glob.1? (The glob.1 in the base folder has far fewer time steps than the whole simulation: 16 as against 158.) Attaching the message file.

Thanks a lot in advance
Anirudh

#12 Re: Code_Aster usage » 3D_SI for hexa elements not working » 2019-01-28 12:02:31

Hi,
Attaching a test run where 3D_SI worked well with GROT_GDEP and TETRA4 elements.
However, I also get this message:
   ! Attention l'élément HEXA8 en 3D_SI ne fonctionne correctement que sur les parallélépipèdes. !
   ! Sur les éléments quelconques on peut obtenir des résultats faux.                            !
   !                                                                                             !
   !                                                                                             !
   ! This is a warning. If you do not understand the meaning of this                             !
   !  warning, you can obtain unexpected results!                     


Thanks
Anirudh

#13 Re: Code_Aster usage » 3D_SI for hexa elements not working » 2019-01-28 11:16:20

Please see attached message file.

I have another question regarding convergence.
Sometimes the solver cuts the time step before reaching the final iteration number, with the message:

Error in the integration of the law of behavior.

What does this message mean? Where can I find more information?
Please help.

Thanks
Anirudh

#14 Re: Code_Aster usage » 3D_SI for hexa elements not working » 2019-01-28 11:14:40

Hello,
Thanks for the reply.
I have an issue with contact. I want to know whether the 'continuous' (CONTINUE) contact method allows penetration. I thought it was based on the augmented Lagrangian method and guaranteed no penetration.
I am getting the following warning:

!----------------------------------------------------------------------------------------------------------------------------------!
   ! <A> <CONTACT_22>                                                                                                                 !
   !                                                                                                                                  !
   ! Contact méthode continue.                                                                                                        !
   !   La méthode ALGO_CONT='PENALISATION  autorise une pénétration des zones de contact. On estime cette pénétration à 1.105366 pour !
   ! cent par rapport à la plus petite maille du maillage.                                                                            !
   !   COEF_PENA_CONT doit être suffisamment grand pour empêcher une trop grande pénétration. Il faut donc augmenter la valeur de     !
   ! COEF_PENA_CONT (et de COEF_PENA_FROT )                                                                                           !
   !   de sorte à inférieur à cinq pour cent de pénétration.                                                                          !
   !   Conseils :                                                                                                                     !
   !   -------                                                                                                                        !
   !   Il n'y a pas de méthode de référence pour le choix des coefficients de pénalisation.                                           !
   !   - une estimation empirique serait de multiplier COEF_PENA_CONT par (1+1.105366)/100*module d'Young du corps le plus dur        !
   !   - Il n'y a pas d'estimation empirique pour COEF_PENA_FROT.                                                                     !
   !                                                                                                                                  !
   !                                                                                                                                  !
   ! This is a warning. If you do not understand the meaning of this                                                                  !
   !  warning, you can obtain unexpected results!                                                                                     !
   !----------------------------------------------------------------------------------------------------------------------------------!

#15 Re: Code_Aster usage » 3D_SI for hexa elements not working » 2019-01-27 17:35:32

Hello,
I tried incorporating the above into the command file. I get this alarm first:

<A> <CALCULEL6_77>                                                                         !
   !                                                                                            !
   ! Problem during the creation of the field by elements ( SIGMINI0).                          !
   !  This field is associated with the parameter PSIEF_R of the option:  TOU_INI_ELGA          !
   !  Certain values provided by the user were not recopied in the final field.                 !
   !                                                                                            !
   !  The problem has 2 possible causes:                                                        !
   !  * The assignment is made in a too broad" way", for example by using key word TOUT='OUI'.  !
   !  * Certain elements do not support the required assignment.                                !
   !                                                                                            !
   ! Risks and advices:                                                                         !
   !  If the problem occurs in command CREA_CHAMP:                                              !
   !  * It is advised to check the field produced with key word INFO=2.                         !
   !  * Key words OPTION and NOM_PARA can have an influence on the result.                      !
   !                                                                                            !
   !                                                                                            !
   ! This is a warning. If you do not understand the meaning of this                            !
   !  warning, you can obtain unexpected results!    

Then I changed the assignment to cover only the 3D element groups, but I still get the following error:

<EXCEPTION> <ELEMENTS3_16>                                   !
   !                                                              !
   !  Behavior:  SIMO_MIEHE not established                       !
   !                                                              !
   ! --------------------------------------------                 !
   ! Contexte du message :                                        !
   !    Option         : RAPH_MECA                                !
   !    Type d'élément : MECA_HEXS8                               !
   !    Maillage       : mesh                                     !
   !    Maille         : M78197                                   !
   !    Type de maille : HEXA8                                    !
   !    Cette maille appartient aux groupes de mailles suivants : !
   !       plate All                                              !
   !    Position du centre de gravité de la maille :              !
   !       x=0.032300 y=0.019000 z=0.001667   

I get the same message if I change the behaviour to GROT_GDEP.
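For reference, the null initial stress field I mentioned is built roughly like this (a sketch, not my exact commands; the group name, option and component list are placeholders):

# Sketch (placeholder names): a null stress field at the Gauss points of the 3D
# element groups only, passed as the initial state of the nonlinear analysis.
SIG0 = CREA_CHAMP(OPERATION='AFFE',
                  TYPE_CHAM='ELGA_SIEF_R',
                  OPTION='TOU_INI_ELGA',
                  MODELE=model,
                  AFFE=_F(GROUP_MA=('plate',),   # 3D groups only, not TOUT='OUI'
                          NOM_CMP=('SIXX', 'SIYY', 'SIZZ', 'SIXY', 'SIXZ', 'SIYZ'),
                          VALE=(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)))

# STAT_NON_LINE(..., ETAT_INIT=_F(SIGM=SIG0), ...)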

Also, I found that STAT_NON_LINE iterates much more slowly than DYNA_NON_LINE. Why is that? One would think STAT_NON_LINE should be faster, since the inertial terms need not be calculated.

Thanks
Anirudh

#16 Re: Code_Aster usage » 3D_SI for hexa elements not working » 2019-01-26 16:26:25

Hi William,
I checked the documentation. It mentions that 3D_SI works with TETRA10, HEXA8 (linear hex) and HEXA20, although certain entries are still unclear to me.
However, I could use TETRA4 (a linear tet mesh) in another problem without issues.

I have another problem. The residual at the beginning of the simulation, when the pointed/sharp-cornered plate comes into contact with the cylinders, is too large. Only a very small time step, around 1e-05 s, can be converged. Also, RESI_FROT is sometimes of the order of 1e+15.
What can I do to ease convergence at a time step of 0.01 s, and how does RESI_FROT come into play?

Thanks a lot.
Anirudh

#17 Code_Aster usage » 3D_SI for hexa elements not working » 2019-01-25 10:47:32

Anirudh
Replies: 8

Hi,
I'm trying to run a simulation composed entirely of HEXA8 elements (3 volumes).
When I try to use the under-integrated modelling 3D_SI, I get this error:


<EXCEPTION> <ELEMENTS4_73>                                                         !
   !                                                                                    !
   ! Les comportements écrits en configuration de référence ne sont pas disponibles     !
   ! sur les éléments linéaires pour la modélisation 3D_SI.                             !
   !                                                                                    !
   ! Pour contourner le problème et passer à un comportement en configuration actuelle, !
   ! ajoutez un état initial nul au calcul.                                             !
   !                                                                                    !
   ! --------------------------------------------                                       !
   ! Contexte du message :                                                              !
   !    Option         : RAPH_MECA                                                      !
   !    Type d'élément : MECA_HEXS8                                                     !
   !    Maillage       : mesh                                                           !
   !    Maille         : M21304                                                         !
   !    Type de maille : HEXA8                                                          !
   !    Cette maille appartient aux groupes de mailles suivants :                       !
   !       plate All                                                                    !
   !    Position du centre de gravité de la maille :                                    !
   !       x=0.597500 y=0.097500 z=-0.004000     

I could use 3D_SI with TETRA4 elements and it worked without errors.
Why is there an issue with hex elements? I am attaching all the files.
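For context, the 3D_SI assignment itself is simply the following (sketch with placeholder names):

# Sketch (placeholder names): assigning the under-integrated 3D_SI modelling
# to the whole mesh.
model = AFFE_MODELE(MAILLAGE=mesh,
                    AFFE=_F(TOUT='OUI',
                            PHENOMENE='MECANIQUE',
                            MODELISATION='3D_SI'))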

Thanks
Anirudh

#18 Re: Code_Aster usage » Simulate a mesh composed by more solids » 2019-01-16 18:32:41

Hello,
This error is very common. You need to explode the compound into its constituent solids. Then make a mesh on the compound without any hypothesis, and create 3 sub-meshes on that main mesh, one for each exploded solid. The algorithm is up to you; Netgen 1D-2D-3D works fine. Go ahead and compute the main mesh. Finally, create volume groups by selecting the top-level mesh and using the "create groups from geometry" icon. Use this mesh in the Aster study.
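In script form the same steps look roughly like this (an untested sketch; 'compound' stands for your published GEOM compound of three solids, and the New() calls may need the study object depending on the Salome version):

# Rough script equivalent of the steps above (untested sketch).
import salome
salome.salome_init()
import SMESH
from salome.geom import geomBuilder
from salome.smesh import smeshBuilder
geompy = geomBuilder.New()
smesh = smeshBuilder.New()

# explode the compound into its solids
solids = geompy.ExtractShapes(compound, geompy.ShapeType["SOLID"], True)

# main mesh on the compound, one Netgen 1D-2D-3D sub-mesh per solid
main_mesh = smesh.Mesh(compound, "compound_mesh")
for solid in solids:
    main_mesh.Tetrahedron(algo=smeshBuilder.NETGEN_1D2D3D, geom=solid)

main_mesh.Compute()

# volume groups taken from the geometry, one per solid
for i, solid in enumerate(solids):
    main_mesh.GroupOnGeom(solid, "Solid_%d" % (i + 1), SMESH.VOLUME)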

Regards
Anirudh

#19 Re: Code_Aster usage » Post-processing issues » 2019-01-07 16:14:36

Hi,
The max aspect ratio in the mesh I uploaded is 8.58: bad, but not that bad. Since there is no direct way to get a good mesh with a low aspect ratio, how trustworthy are the results? Can we establish a correlation between the aspect ratio of a particular element and the result quality (say, the max von Mises stress)? Could Homard be used here?

Thanks
Anirudh

#20 Re: Code_Aster usage » Post-processing issues » 2019-01-07 06:50:35

Hi,
Can we force GMSH to create elements that satisfy a minimum quality criterion? For example, it should create elements only when their aspect ratio is less than a given value, say 10. I am also curious about the gamma criterion in GMSH. Have you been able to use gamma to some effect to create only good-quality elements?

Thanks
Anirudh

#21 Re: Code_Aster usage » Post-processing issues » 2019-01-04 17:09:33

Hi,
I can still see highly distorted tetrahedra near the root of the wheel teeth, which result from small edges. The aspect ratio of these elements should be > 30 (just a guess). Can GMSH skip over those edges?

Thanks

#22 Re: Code_Aster usage » Post-processing issues » 2019-01-04 14:54:30

Hello,
I used Netgen in Salome to mesh the bodies. I restricted the minimum size, otherwise Netgen creates a lot of elements, and we want to keep the element count low: just enough elements, especially when 3D elements are involved.
Unfortunately, bad CAD geometry cannot produce a good mesh, especially if the CAD data has lots of small edges. I still have not found a way to bypass small edges while meshing. I attach the CAD file; see if you can get a better mesh.


Regards
Anirudh

#23 Re: Code_Aster usage » Post-processing issues » 2019-01-04 03:34:58

Hi,
I ran it on version 13.4 parallel.
It is a concern if defined variables cannot be injected into the formula in the latest version.
There are more issues I am finding. The results are sensitive to the RESI_GEOM value: changing RESI_GEOM in the contact definition from 1 to 0.01 changed the max von Mises stress from the order of 10^6 Pa to 10^8 Pa, i.e. about 100 times more. Why does that happen?
I could also see that this change greatly reduced the penetration of the worm mesh into the wheel mesh near the hotspot areas. This observation calls into question the assumption underlying the CONTINUE contact formulation: it is supposed to guarantee that no penetration occurs, perhaps by augmenting the penalty coefficient, but that does not seem to work when the mesh is rather coarse. So here a penetration occurs that depends on RESI_GEOM.
Also, if I try blocking DZ of the wheel (perpendicular displacements) using AFFE_CHAR_CINE, the max von Mises stress is of the order of 10^14 Pa.
Finally, how do you get GMSH to display this: the mesh deformed by the displacements, but showing the von Mises stress contours at each time step? When I try in GMSH, I always see a duplicate mesh at the first time step.

Thanks
Anirudh

#24 Re: Code_Aster usage » Post-processing issues » 2019-01-03 13:17:45

Hi,
I believe at line 48 it is declared that x=1.

Regards
Anirudh

#25 Re: Code_Aster usage » Post-processing issues » 2019-01-02 12:41:10

Hi,
Attaching the test run. I tried these:
TYPE_MAILLE=('3D','POI1') -------> CA does not allow more than one argument
TYPE_MAILLE='TOUT'           -------> CA prints the same result as before, with different fields
It would be nice to have the mass node printed as well, though it is good enough for me.

Thanks
Anirudh
