
#1 Re: Code_Aster usage » 'Bus Error' during execution of parallel version 14.4 » 2020-07-02 14:58:53

mf

Hi,

thank you for taking the time to answer, I really appreciate that.

I also tested smaller problems with <1M DOFs, and the error occurs there as well. I therefore rule out insufficient RAM or disk space; both are plentiful on this machine (I always monitor RAM usage with htop and disk usage with watch -d df, and I never get anywhere close to the limits).

Of course I tried:

mpi_nbcpu 12
mpi_nbnoeud 1
ncpus 4

but as I mentioned, with these parameters the dual-CPU machine next to it is just as fast (a dual 8-core that I run with the following parameters to minimize computation time:

mpi_nbcpu 8
mpi_nbnoeud 1
ncpus 2).

The MPI installation is OK; I checked with small programs compiled with mpicc.

The hardware is OK; it runs other software flawlessly 24/7.

I don't suspect the source code, because then I wouldn't be the only one with this problem.

The only two possibilities left are:
-) a faulty configuration/compilation (I suspect MUMPS_MPI; the run never reaches the matrix decomposition stage)
-) this error being normal (I suspect it is not, as Code_Aster would not scale very well in that case)

So, I guess I have to live with that at the moment...

Thank you anyway,

Mario.

#2 Re: Code_Aster usage » Beginner with Code_Aster - Incorrect Deflections » 2020-06-25 13:44:22

mf

Hi again,

if I switch the materials and change 'Wood' to E=15000 MPa, I get ~9 mm displacement at the center of the structure. Are your materials assigned as intended?

Regarding your .comm from above: the CALC_CHAMP result and the MECA_STATIQUE (or STAT_NON_LINE etc.) result should have the same name (use 'reuse the input object' in Salome-Meca), otherwise the DEPL fields do not end up in the results; I don't know why.
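
For illustration, a minimal sketch of that reuse pattern in a .comm file (the names model, mater and load are placeholders, and the fields requested in CALC_CHAMP depend on your case):

result = MECA_STATIQUE(MODELE=model,
                       CHAM_MATER=mater,
                       EXCIT=_F(CHARGE=load))

# Overwrite the same result object so DEPL stays available alongside
# the newly computed fields ('reuse the input object' in AsterStudy).
result = CALC_CHAMP(reuse=result,
                    RESULTAT=result,
                    CONTRAINTE=('SIGM_NOEU',),
                    CRITERES=('SIEQ_NOEU',))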

Mario

#3 Re: Code_Aster usage » 'Bus Error' during execution of parallel version 14.4 » 2020-06-24 10:58:53

mf

Hi,

could any of you with a dual-CPU or quad-CPU system please check (or simply answer if you know for sure without checking) whether it is possible, without error, to calculate on an otherwise working parallel installation, for example on a dual 10-core system, with:

mpi_nbcpu 20
mpi_nbnoeud 1
ncpus 1 or 2 (2 for a quad-CPU system; not feasible on a dual-CPU system)

I know that calculating without OpenMP is not the fastest way, but it would show whether this is possible at all without error. If it is, that would point to a problem with the configuration/compilation of this Docker version.

Thank you,

Mario.

#4 Re: Code_Aster usage » Beginner with Code_Aster - Incorrect Deflections » 2020-06-23 22:45:33

mf

Hi,

this is most certainly a problem with units.

Looking at your .comm file and the material parameters (E and rho), I assume you want to use the N-mm system (all SI units, but mm instead of m). In that case your material 'Wood' has a very low elastic modulus of only 17 MPa; elastic moduli of woods, expressed in MPa, are about 300-1000 times higher.

It also means your forces are in N. Is this correct? Your force in -z would be 0.932 N, which seems low if you are talking about kN in your Python example (that is, if the two can be compared at all).
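
As an illustration of a consistent N-mm set of units (a sketch only; the numbers below are generic, not taken from your model), with E in MPa and density in t/mm^3:

# Consistent N-mm-t-s units: lengths in mm, forces in N, stresses in MPa
wood = DEFI_MATERIAU(ELAS=_F(E=12000.0,     # ~12 GPa, a typical softwood value
                             NU=0.3,
                             RHO=5.0e-10))  # ~500 kg/m^3 expressed in t/mm^3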

So, I'd recommend to check your units,

Mario.

EDIT: oh sorry, it is a FORCE_ARETE, therefore 0.932 N/mm. So this part might be correct?

#5 Re: Code_Aster usage » 'Bus Error' during execution of parallel version 14.4 » 2020-06-23 15:05:25

mf

Ok, I managed to convert to MED 3.2 with the latest SM, that seems to be close enough to 3.1.1.

Here is the result I get with V13 and MED 3.2 and

mpi_nbcpu 24
mpi_nbnoeud 1
ncpus 2

within the export file and the same simulation:

   !-------------------------------------------------------------!
   ! <EXCEPTION> <APPELMPI_5>                                    !
   !                                                             !
   !  Erreur lors de l'appel à une fonction MPI.                 !
   !  Les détails de l'erreur devraient être affichés ci-dessus. !
   !-------------------------------------------------------------!
   

<F> MPI Error code 138008847:
    Other MPI error, error stack:
MPI_Recv(200).........................: MPI_Recv(buf=0x7fe72c82f010, count=160001, MPI_INTEGER, src=5, tag=29, comm=0x84000004, status=0x7ffdb06d79b0) failed
MPID_Recv(132)........................:
MPID_nem_lmt_RndvRecv(168)............:
do_cts(562)...........................:
MPID_nem_lmt_shm_start_recv(181)......:
MPID_nem_allocate_shm_region(886).....:
MPIU_SHMW_Seg_create_and_attach(897)..:
MPIU_SHMW_Seg_create_attach_templ(620): write failed

   
   !-------------------------------------------------------------!
   ! <EXCEPTION> <APPELMPI_5>                                    !
   !                                                             !
   !  Erreur lors de l'appel à une fonction MPI.                 !
   !  Les détails de l'erreur devraient être affichés ci-dessus. !
   !-------------------------------------------------------------!
   
/tmp/aster/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/mpi_script.sh: line 37:   221 Bus error               (core dumped) /home/aster/aster/13.6_mpi/bin/aster /home/aster/aster/13.6_mpi/lib/aster/Execution/E_SUPERV.py -commandes fort.1 --max_base=250000 --num_job=72 --mode=interactif --rep_outils=/home/aster/aster/outils --rep_mat=/home/aster/aster/13.6_mpi/share/aster/materiau --rep_dex=/home/aster/aster/13.6_mpi/share/aster/datg --numthreads=2 --suivi_batch --memjeveux=16384.0 --tpmax=357900.0
EXECUTION_CODE_ASTER_EXIT_72=135
/tmp/aster/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/mpi_script.sh: line 37:   248 Bus error               (core dumped) /home/aster/aster/13.6_mpi/bin/aster /home/aster/aster/13.6_mpi/lib/aster/Execution/E_SUPERV.py -commandes fort.1 --max_base=250000 --num_job=72 --mode=interactif --rep_outils=/home/aster/aster/outils --rep_mat=/home/aster/aster/13.6_mpi/share/aster/materiau --rep_dex=/home/aster/aster/13.6_mpi/share/aster/datg --numthreads=2 --suivi_batch --memjeveux=16384.0 --tpmax=357900.0
EXECUTION_CODE_ASTER_EXIT_72=135
EXIT_COMMAND_72_00000022=0
<INFO> restore bases from /tmp/aster/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/BASE_PREC

<A>_ALARM          no glob/bhdf base to restore


<E>_ABNORMAL_ABORT execution aborted (comm file #1)

<INFO> Code_Aster run ended, diagnostic : <E>_ABNORMAL_ABORT

--------------------------------------------------------------------------------
Content of /tmp/aster/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global/global after execution

.:
total 60352
drwx------  4 aster aster     4096 Jun 23 14:00 .
drwxr-xr-x 25 aster aster     4096 Jun 23 14:00 ..
-rw-r--r--  1 aster aster     1037 Jun 23 13:55 72.export
drwxr-xr-x  2 aster aster     4096 Jun 23 13:55 BASE_PREC
drwxr-xr-x  2 aster aster     4096 Jun 23 13:55 REPE_OUT
-rw-r--r--  1 aster aster     2653 Jun 23 13:55 config.txt
-rw-r--r--  1 aster aster    25023 Jun 23 13:55 fort.1
-rw-r--r--  1 aster aster    25023 Jun 23 13:55 fort.1.1
-rw-r--r--  1 aster aster     1843 Jun 23 13:55 fort.1.2
-rw-r--r--  1 aster aster 34298146 Jun 23 13:55 fort.2
-rw-r--r--  1 aster aster 23920390 Jun 23 13:55 fort.20
-rw-r--r--  1 aster aster  3451256 Jun 23 13:55 fort.3
-rw-r--r--  1 aster aster    29630 Jun 23 13:55 fort.4
-rw-r--r--  1 aster aster       21 Jun 23 14:00 fort.6
-rwxr-xr-x  1 aster aster     3581 Jun 23 13:55 mpi_script.sh

REPE_OUT:
total 8
drwxr-xr-x 2 aster aster 4096 Jun 23 13:55 .
drwx------ 4 aster aster 4096 Jun 23 14:00 ..


--------------------------------------------------------------------------------
Size of bases


--------------------------------------------------------------------------------
Copying results


<A>_COPYFILE       no such file or directory: fort.80


<A>_COPYFILE       no such file or directory: fort.5

copying .../fort.6...                                                   [  OK  ]

<E>_ABNORMAL_ABORT Code_Aster run ended



---------------------------------------------------------------------------------
                                            cpu     system    cpu+sys    elapsed
---------------------------------------------------------------------------------
   Preparation of environment              0.00       0.00       0.00       0.00
   Copying datas                           0.05       0.09       0.14       0.33
   Code_Aster run                        554.91      61.90     616.81     308.76
   Copying results                         0.01       0.02       0.03       0.01
---------------------------------------------------------------------------------
   Total                                 555.06      62.06     617.12     309.48
---------------------------------------------------------------------------------
   (*) cpu and system times may be not correctly counted using mpirun.

as_run 2018.0

------------------------------------------------------------
--- DIAGNOSTIC JOB : <F>_ABNORMAL_ABORT
------------------------------------------------------------


EXIT_CODE=4

The MPI output looks a bit more detailed at first glance, but I think it is essentially the same error.

Hope that helps.

=================================================================================

EDIT: in V14 I also tried altering the mpirun call in as_run. I tried combinations of the rank-by, map-by and bind-to options (numa, socket, etc.) in the mpirun call but did not succeed; the error persists. I also tried without MATR_DISTRIBUEE='OUI'; the error persists.

#6 Re: Code_Aster usage » 'Bus Error' during execution of parallel version 14.4 » 2020-06-23 14:28:21

mf

Hi,

thank you. I tried the v13 Docker image on the same machine, but I can't use my current simulation because of this (different) error:

   !--------------------------------------------------------------------!
   ! <A> <MED_24>                                                       !
   !                                                                    !
   !   -> Le fichier n'a pas été construit avec la même version de MED. !
   !   -> Risque & Conseil :                                            !
   !      La lecture du fichier peut échouer !                          !
   !                                                                    !
   !                                                                    !
   !    Version de la bibliothèque MED utilisée par Code_Aster:  3 3 1  !
   !                                                                    !
   !    Version de la bibliothèque MED pour créer le fichier  :  4 0 0  !
   !                                                                    !
   !   -> Incohérence de version détectée.                              !
   !                                                                    !
   !                                                                    !
   ! Ceci est une alarme. Si vous ne comprenez pas le sens de cette     !
   ! alarme, vous pouvez obtenir des résultats inattendus !             !
   !--------------------------------------------------------------------!

Basically, it means that the mesh files are too new... :-(  I'd have to create an 'older' example or use a different one.

Maybe one of the test cases will provoke the same errors. I don't know yet.

This will take a while. I will come back to you.

#7 Re: Code_Aster usage » 'Bus Error' during execution of parallel version 14.4 » 2020-06-23 09:51:15

mf

Hi,

I never managed to do the installation myself. Like you say, this is quite a tedious task (or quite frankly, a nightmarish experience).

I use this Docker image instead; this is where the error comes up. It is an encapsulated, virtual environment (I am not allowed to post links, so just add the https://):

github.com/tianyikillua/code_aster_on_docker

Just install Docker on your OS (I recommend Docker on Linux, it is faster), follow the instructions there, and you are good to go with the parallel version. No compiling is needed.

Cheers,

Mario.

#9 Code_Aster usage » 'Bus Error' during execution of parallel version 14.4 » 2020-06-20 18:25:05

mf
Replies: 11

Hello,

while testing new hardware I encountered a strange error during execution of the parallel version. I am using the Docker version of 14.4_MPI. The error seems to show up right after the first iteration of a nonlinear simulation whenever I set mpi_nbcpu greater than the number of cores of one CPU in this system. Up until that point, everything is fine (partitioning etc.). I would understand the error if this were a single-CPU system, but that is not the case: the system has 4 CPUs with 12 cores each (48 cores in total). Apparently, I cannot enter, for example:

mpi_nbcpu 24
mpi_nbnoeud 1
ncpus 2

In my experience, ncpus should always be 2 and mpi_nbcpu should be TOTAL_CORES/ncpus for the shortest run times.

I have to say that the model is otherwise fine; the error seems to depend entirely on what I enter for mpi_nbcpu. Here is the error when I put the above parameters (24/1/2) in my export file:

Instant de calcul:  1.000000000000e+00
----------------------------------------------------------------------------------------------------------------------------------------------------------
|     CONTACT    |     CONTACT    |     NEWTON     |     RESIDU     |     RESIDU     |     OPTION     |     CONTACT    |     CONTACT    |     CONTACT    |
|    BCL. GEOM.  |    BCL. CONT.  |    ITERATION   |     RELATIF    |     ABSOLU     |   ASSEMBLAGE   |    PRESSURE    |     CRITERE    |  PENETRATION   |
|    ITERATION   |    ITERATION   |                | RESI_GLOB_RELA | RESI_GLOB_MAXI |                |    ERROR       |    VALEUR      |                |
----------------------------------------------------------------------------------------------------------------------------------------------------------
|     CONTACT    |     CONTACT    |     NEWTON     |     RESIDU     |     RESIDU     |     OPTION     |     CONTACT    |     CONTACT    |     CONTACT    |
|    BCL. GEOM.  |    BCL. CONT.  |    ITERATION   |     RELATIF    |     ABSOLU     |   ASSEMBLAGE   |    PRESSURE    |     CRITERE    |  PENETRATION   |
|    ITERATION   |    ITERATION   |                | RESI_GLOB_RELA | RESI_GLOB_MAXI |                |    ERROR       |    VALEUR      |                |
----------------------------------------------------------------------------------------------------------------------------------------------------------
|     CONTACT    |     CONTACT    |     NEWTON     |     RESIDU     |     RESIDU     |     OPTION     |     CONTACT    |     CONTACT    |     CONTACT    |
|    BCL. GEOM.  |    BCL. CONT.  |    ITERATION   |     RELATIF    |     ABSOLU     |   ASSEMBLAGE   |    PRESSURE    |     CRITERE    |  PENETRATION   |
|    ITERATION   |    ITERATION   |                | RESI_GLOB_RELA | RESI_GLOB_MAXI |                |    ERROR       |    VALEUR      |                |
----------------------------------------------------------------------------------------------------------------------------------------------------------
/tmp/aster/global/global/global/global/global/global/global/mpi_script.sh: line 37:  1837 Bus error               (core dumped) /home/aster/aster/14.4_mpi/bin/aster /home/aster/aster/14.4_mpi/lib/aster/Execution/E_SUPERV.py -commandes fort.1 --max_base=250000 --num_job=1560 --mode=interactif --rep_outils=/home/aster/aster/outils --rep_mat=/home/aster/aster/14.4_mpi/share/aster/materiau --rep_dex=/home/aster/aster/14.4_mpi/share/aster/datg --numthreads=2 --suivi_batch --tpmax=357900.0 --memjeveux=8192.0
EXECUTION_CODE_ASTER_EXIT_1560=135
EXIT_COMMAND_1560_00000022=0
<INFO> restore bases from /tmp/aster/global/global/global/global/global/global/global/BASE_PREC

<A>_ALARM          no glob/bhdf base to restore


<E>_ABNORMAL_ABORT execution aborted (comm file #1)

<INFO> Code_Aster run ended, diagnostic : <E>_ABNORMAL_ABORT

--------------------------------------------------------------------------------
Content of /tmp/aster/global/global/global/global/global/global/global after execution

.:
total 60332
drwx------  4 aster aster     4096 Jun 20 17:08 .
drwxr-xr-x 26 aster aster     4096 Jun 20 17:08 ..
-rw-r--r--  1 aster aster      901 Jun 20 17:03 1560.export
drwxr-xr-x  2 aster aster     4096 Jun 20 17:03 BASE_PREC
drwxr-xr-x  2 aster aster     4096 Jun 20 17:03 REPE_OUT
-rw-r--r--  1 aster aster     2756 Jun 20 17:03 config.txt
-rw-r--r--  1 aster aster    25023 Jun 20 17:03 fort.1
-rw-r--r--  1 aster aster    25023 Jun 20 17:03 fort.1.1
-rw-r--r--  1 aster aster     1843 Jun 20 17:03 fort.1.2
-rw-r--r--  1 aster aster 34291971 Jun 20 17:03 fort.2
-rw-r--r--  1 aster aster 23914368 Jun 20 17:03 fort.20
-rw-r--r--  1 aster aster  3444719 Jun 20 17:03 fort.3
-rw-r--r--  1 aster aster    26189 Jun 20 17:03 fort.4
-rw-r--r--  1 aster aster       21 Jun 20 17:08 fort.6
-rwxr-xr-x  1 aster aster     2256 Jun 20 17:03 mpi_script.sh

REPE_OUT:
total 8
drwxr-xr-x 2 aster aster 4096 Jun 20 17:03 .
drwx------ 4 aster aster 4096 Jun 20 17:08 ..


--------------------------------------------------------------------------------
Size of bases


--------------------------------------------------------------------------------
Copying results


<A>_COPYFILE       no such file or directory: fort.80


<A>_COPYFILE       no such file or directory: fort.5

copying .../fort.6...                                                   [  OK  ]

<E>_ABNORMAL_ABORT Code_Aster run ended



---------------------------------------------------------------------------------
                                            cpu     system    cpu+sys    elapsed
---------------------------------------------------------------------------------
   Preparation of environment              0.00       0.00       0.00       0.00
   Copying datas                           0.05       0.09       0.14       0.22
   Code_Aster run                        278.98      31.83     310.81     311.31
   Copying results                         0.00       0.00       0.00       0.01
---------------------------------------------------------------------------------
   Total                                 279.15      31.98     311.13     311.91
---------------------------------------------------------------------------------
   (*) cpu and system times may be not correctly counted using mpirun.

as_run 2019.0

------------------------------------------------------------
--- DIAGNOSTIC JOB : <F>_ABNORMAL_ABORT
------------------------------------------------------------

The 'bus error' seems to be a low-level error, similar to a segmentation fault, so Code_Aster itself does not report any reason for it.

I looked at line 37 of mpi_script.sh, but I do not see anything wrong there. It looks like this for each proc.X (X being the MPI process number):

#!/bin/bash
#
# script template to run Code_Aster using MPI
#
#
# This template contains following Python strings formatting keys :
#
#     cmd_to_run         : Code_Aster command line
#     mpi_get_procid_cmd : command to retreive processor ID
#
# automatically generated for job number #1560
#

ASRUN_PROCID=`echo $PMI_RANK`

if [ -z "$ASRUN_PROCID" ]; then
   echo "Processor ID is not defined !"
   exit 4
fi

ASRUN_WRKDIR=/tmp/aster/global/global/global/global/global/global/proc.$ASRUN_PROCID

if [ -e $ASRUN_WRKDIR ]; then
   rm -rf $ASRUN_WRKDIR
fi
if [ ! -d /tmp/aster/global/global/global/global/global/global ]; then
   mkdir -p /tmp/aster/global/global/global/global/global/global
fi
cp -r /tmp/aster/global/global/global/global/global/global/global $ASRUN_WRKDIR
if [ $? -ne 0 ]; then
    echo "non zero exit status for : cp -r /tmp/aster/global/global/global/global/global/global/global $ASRUN_WRKDIR"
    exit 4
fi
chmod 0700 $ASRUN_WRKDIR

cd $ASRUN_WRKDIR
( . /home/aster/aster/14.4_mpi/share/aster/profile.sh ; /home/aster/aster/14.4_mpi/bin/aster /home/aster/aster/14.4_mpi/lib/aster/Execution/E_SUPERV.py -commandes fort.1  --max_base=250000 --num_job=1560 --mode=interactif --rep_outils=/home/aster/aster/outils --rep_mat=/home/aster/aster/14.4_mpi/share/aster/materiau --rep_dex=/home/aster/aster/14.4_mpi/share/aster/datg --numthreads=2 --suivi_batch --tpmax=357900.0 --memjeveux=8192.0 ; echo EXECUTION_CODE_ASTER_EXIT_1560=$? ) | tee fort.6
iret=$?

if [ -f info_cpu ]; then
   infos=`cat info_cpu`
   echo "PROC=$ASRUN_PROCID INFO_CPU=$infos"
fi

if [ $ASRUN_PROCID -eq 0 ]; then
   echo "Content after execution of $ASRUN_WRKDIR :"
   ls -la . REPE_OUT

   rm -f /tmp/aster/global/global/global/global/global/global/global/glob.* /tmp/aster/global/global/global/global/global/global/global/bhdf.* /tmp/aster/global/global/global/global/global/global/global/pick.*
   rm -rf /tmp/aster/global/global/global/global/global/global/global/REPE_OUT
   # to save time during the following copy
   rm -rf $ASRUN_WRKDIR/REPE_IN $ASRUN_WRKDIR/Python
   cp -rf $ASRUN_WRKDIR/* /tmp/aster/global/global/global/global/global/global/global/
   kret=$?
   if [ $kret -gt $iret ]; then
      iret=$kret
   fi
fi
rm -rf $ASRUN_WRKDIR

I did the following additional tests on this system:
1) OpenFOAM with MPI runs fine with 48 MPI processes (of course, that has nothing to do with the Code_Aster Docker image, which by definition is encapsulated with its own MPI installation). Nevertheless, it shows that the hardware is OK.
2) I tested several C programs with OpenMPI, e.g. a hello-world MPI example. Here too, I can use 48 MPI processes (or even more) without any problems. The hardware seems OK.
3) I compiled simple MPI C programs inside the Docker container; here as well I am able to use 48 MPI processes. The MPI installation inside the container is OK.

To summarize, I am not sure what is wrong. The problem is that if I cannot enter 24/1/2, this 4-CPU system is no faster than a similar system with 2 CPUs, and I cannot take advantage of the 2 additional CPUs.

I'd be glad for any advice, perhaps I only have to change some parameters? Or perhaps the installation is faulty?

Please let me know if you need more data to judge this error.

Thank you,

Mario.

#10 Re: Code_Aster installation » Code_Aster inside a Docker container available for everyone » 2020-05-06 11:46:07

mf

Dear Tianyi,

first of all, thank you for this, it is really great stuff. I tried to compile the parallel version myself and did not succeed (I finally gave up; none of the online tutorials is 100% correct), so this is a way for all users who are not software engineers to use the MPI version. Under Linux it seems to be VERY fast.

However, I have one question: can I use this Docker image on more than one node (the nodes are connected via InfiniBand), or are modifications to the image necessary? I cannot test this at the moment, hence my question. There is also something called 'Docker swarm'; could this be a way to run it on more than one node? As you may have guessed, I am not a software engineer, so these might be dumb questions...

Thank you anyway,

Mario.

#11 Re: Code_Aster usage » [SOLVED]Under which format is preferable to import? » 2020-02-25 17:15:40

mf

Hello,

in a 'real life' engineering situation? .step, for sure. It is standardized in ISO 10303. I don't want to offend anyone, but I have never seen anyone use .brep in the last 20 years or so. Every CAD package can read and generate .step (or .stp); with .brep I'm not so sure about that,

ciao

Mario.

#12 Re: Code_Aster usage » Radial force » 2020-02-24 20:09:17

mf

Hi again,

I had a look around the forum, and unfortunately cylindrical coordinates do not seem to be possible in Aster. The following is somewhat of an exception, but not exactly:
According to post id=18200, applying a radial displacement is possible, although only to a surface. There is a not very elegant workaround if you want to apply an IMPO to an edge, or in this case to a slender ring resembling an edge. I'm not sure if this is what you want, but have a look at LOAD 3 in the attached example: I partitioned the top circular face and applied a FACE_IMPO with a DNOR to the outer ring (excluding the inner part of the circular top face). It is NOT PRETTY, but maybe it helps. In a way it is the displacement counterpart of applying a PRES_REP to a slender ring (an equally ugly 'solution').
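
The relevant load in that example is just a normal-displacement condition on the partitioned ring; a minimal sketch (the names model and 'RING_OUT' are placeholders):

# Impose a displacement along the local face normal of the outer ring,
# i.e. an (approximately) radial displacement on that slender strip
load3 = AFFE_CHAR_MECA(MODELE=model,
                       FACE_IMPO=_F(GROUP_MA='RING_OUT',
                                    DNOR=0.1))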

What about an axisymmetric model?

Ciao,

Mario.

#13 Re: Code_Aster usage » Radial force » 2020-02-24 17:35:01

mf

Hello,

PRES_REP can't do that, I'm pretty sure. But what if your surface is really slender, like a very thin ring (partition the model)?

I'm sure the rest is possible in Aster in a few different ways, although I may not be good enough for it. I can imagine doing it with LIAISON_DDL, but I would have to figure that out (somehow put the definition of a circle in 3D in there and add a small dr to the radius; the force would then have to be calibrated, though). I'm really not sure...

Maybe a real Aster-pro can give it a try?

Mario.

EDIT: LIAISON_DDL can only handle linear relations, so this does not seem possible because of the quadratic terms...

#14 Code_Aster development » Proposal for new function CALC_FLUX » 2020-02-22 16:12:16

mf
Replies: 0

Dear developers,

in my post id=24652 I asked whether it is planned to add a new function to Aster/AsterStudy that calculates the normal heat flux through an arbitrary surface. It should not be too different from CALC_PRESSION.

As nobody answered in that post: are there any plans to do this?

Kind regards et merci,

Mario.

#15 Re: Code_Aster usage » What is the best strategy to follow when you have many LIASON_MAIL? » 2020-02-22 16:05:02

mf

Hello,

be aware that a LIAISON_MAIL cannot be turned off by a multiplier function = 0 (FONC_MULT) in MECA_STATIQUE or STAT_NON_LINE. It is always active within one analysis (meaning one single MECA_STATIQUE or STAT_NON_LINE). This is a common source of errors! If you want to turn a LIAISON off, you have to use at least two chained MECA_STATIQUEs or STAT_NON_LINEs.
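
A minimal sketch of what 'chained' means here (all names are placeholders; the second computation restarts from the first via ETAT_INIT and simply omits the LIAISON load from its EXCIT):

# Phase 1: with the glued connection (LIAISON_MAIL load) active
res1 = STAT_NON_LINE(MODELE=model,
                     CHAM_MATER=mater,
                     EXCIT=(_F(CHARGE=load), _F(CHARGE=liaison)),
                     INCREMENT=_F(LIST_INST=times1))

# Phase 2: restart from the final state of phase 1, without the LIAISON
res2 = STAT_NON_LINE(MODELE=model,
                     CHAM_MATER=mater,
                     EXCIT=_F(CHARGE=load),
                     ETAT_INIT=_F(EVOL_NOLI=res1),
                     INCREMENT=_F(LIST_INST=times2))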

Bye,

Mario.

#16 Re: Code_Aster usage » Python Loop » 2020-02-22 16:00:06

mf

Hello,

it seems the bracketing in lines 21-23 of your .comm is erroneous. Life is easier with an editor that can do bracket checking (e.g. 'Geany').

If you intend to use Geany, add .comm to the filetype definitions under Python (Python=*.py;*.pyw;SConstruct;SConscript;wscript;*.comm;). It will then check your .comm as if it were a Python file (which it partly is).

Bye,

Mario.

#17 Re: Code_Aster usage » Error when using force_face » 2020-02-22 15:52:46

mf

Hello,

the error says that you tried to address a group in your FORCE_FACE command that is not present in your model. Maybe you forgot a group? If you created your groups in the geometry module, did you also import them into your mesh ('Create Groups from Geometry' in Salome)? Did you type the group names into Aster by hand (typo)?
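
For reference, FORCE_FACE simply references a face group by name, so that group has to exist in the mesh the model was built on; a sketch (model and 'LOAD_FACE' are placeholders):

# 'LOAD_FACE' must be a face group present in the mesh used by 'model'
load = AFFE_CHAR_MECA(MODELE=model,
                      FORCE_FACE=_F(GROUP_MA='LOAD_FACE',
                                    FZ=-1.0))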

Bye,

Mario.

#18 Re: Code_Aster usage » Radial force » 2020-02-22 15:37:54

mf

Hello,

a negative PRES_REP on its curved surface would act like a radial force pointing outwards. Its radius will increase and, depending on your NU, it will get shorter. Is that what you want?
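
A sketch of that idea (model and the group name are placeholders; PRES_REP acts along the outward face normal, so a negative pressure pulls outwards):

# Negative pressure on the lateral (curved) surface = outward radial pull
radial = AFFE_CHAR_MECA(MODELE=model,
                        PRES_REP=_F(GROUP_MA='CURVED_SURF',
                                    PRES=-1.0))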

Bye,

Mario.

#19 Re: Code_Aster usage » How to extract Nodal Equivalent Plastic Strain from COQUE_3D » 2020-02-17 22:02:24

mf

Good evening,

for what it's worth, with DKT elements and RELATION=VMIS_ISOT_TRAC I am able to extract V1 of VARI_NOEU with:

-) a POST_CHAMP of VARI_ELNO
-) a CALC_CHAMP of VARI_NOEU
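
In .comm terms, that sequence looks roughly like this (a sketch; the result name and the layer/level chosen in EXTR_COQUE are placeholders):

# Extract the internal-variable field on one shell layer...
vari_sup = POST_CHAMP(RESULTAT=result,
                      EXTR_COQUE=_F(NOM_CHAM='VARI_ELNO',
                                    NUME_COUCHE=1,
                                    NIVE_COUCHE='SUP'))

# ...then bring it to the nodes; V1 is the cumulated plastic strain
# for VMIS_ISOT_TRAC
vari_sup = CALC_CHAMP(reuse=vari_sup,
                      RESULTAT=vari_sup,
                      VARI_INTERNE=('VARI_NOEU',))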

Maybe it helps,

Mario.

#20 Re: Code_Aster usage » How do I assemble three meshes together into one? » 2020-02-14 07:53:15

mf

Hello,

the 3 separate meshes are there, I know. But the group names are totally different, which is a lot of work for somebody who sees your model for the first time. I will try renaming if I have time tomorrow. Also, E is smaller then, as these parts are bigger.

Would you be able to install the 2019 version? You should do that anyway (bug fixes etc.).

Units: I don't know where the idea of scaling a model comes from; many new users seem to do it. I do not recommend it: 1. it creates a whole lot of additional work, and 2. it may lead to wrong results if you forget anything. If you do crack analysis you will get very weird units (square roots involved)!

The impact of a nonlinear material is certainly an increased number of iterations, and possibly convergence problems (more RAM is needed as well).

Import of .comm: go into AsterStudy, deactivate or delete your stage and choose 'Add Stage from File'. However, you will always have to choose a new results file in the 'Output' IMPR_RESU, as your directory structure is different; AsterStudy therefore always removes that entry from your imported .comm in advance. Also check the mesh input, where it should do the same. Then it should run,

bye,

Mario.

#21 Re: Code_Aster usage » How do I assemble three meshes together into one? » 2020-02-13 22:08:30

mf

Hi,
with regard to what you want to do (contact with friction), the .comm files I posted are not correct.

To your questions:
1.) "Does it mean that if I want to create a model that has multiple materials, the AFFE_MATERIAU is to assign the material to the part?"  AFFE_MATERIAU is to assign the material(s) to the MODEL.
2.) And does this also mean that I am supposed to use AFFE_CHAR_MECA in order to "assemble" a model that has multiple parts and multiple materials together in preparation for the analysis (regardless of whether it is linear static or non-linear static)?          AFFE_CHAR_MECA holds your boundary conditions (forces, displacements, temperature...), at least some of them. By the way, your model does not have multiple parts in that regard, as you used a compound in the first place. If you had imported three meshes (take a look at my example with the 3 boxes) you could have connected them with LIAISON_MAILs (those are indeed in AFFE_CHAR_MECA) or with DEFI_CONTACT (ultimately you will use DEFI_CONTACT as you describe in your last post; this is separate from AFFE_CHAR_MECA).
3.) "The tutorial from Cyprien said that contact = LIAISON_MAIL, so as a result of this -- this is why I have LIAISON_MAILLAGE (contacts) because that's what I thought the French term means." LIAISON is a form of contact, but it's not always the right choice. The example of Cyprien with the glued boxes are 2 separate meshes, no compound (2 times LIRE_MAILLAGE, 1 ASSE_MAILLAGE, then he uses LIAISON_MAIL to connect them, just as I did in the 3 Box example). Just imagine a sphere on a flat surface, like in a bearing: LIAISON_MAIL between the two would also deform the part of the planar surface that is not in physical contact with the sphere. Every node of the slave moves with the master nodes. That would not be correct, this is a case for DEFI_CONTACT, especially when the surface in contact increases with applied force (the sphere on a plane is a point contact first, with increasing load it becomes a circular contact). So a LIAISON is even more than glueing when the surfaces do not match, because EVERY node of the slave has to move. Also those far away! See the sketch attached... with LIAISON you are even able to 'connect' two parts that will ALWAYS be separated from each other.
4.) Again: for modeling 'real' contact you MUST import separate meshes. A compound does not work with DEFI_CONTACT.
5.)"My thought process was that if I can get the "glued" contact to work between the three parts, then I can change it from a glued contact to a friction contact."   That's ok, but further modifications will be necessary (3 separate meshes..loads, model, materials will be the same +DEFI_CONTACT and some minor modifications to STAT_NON_LINE + FROTTEMENT)
6.)"In regards to the weak spring, would there be a way to generate a mesh from the CAD files so that I will get a conformal mesh? Or how would I know that there is a singularity?"    Yes, Salome_Meca has some very good meshing capabilities. For a nonlinear calculation, like before, I ALWAYS do a conversion of 2nd order meshes to 1str order, just to save time when building and testing the model. This conversion was not possible with your mesh. Sorry, I really don't know why. Unfortunately, singularities may have several reasons (not only problems with the mesh...insufficient or conflicting BCs are also a possibility). You would know you have a singularity, because you would not be able to get past the first iteration. Aster immediately stops with default preferences. For example: any part that is not held in place by either a DDL_IMPO, a LIAISON, a CONTACT etc and is able to move in space will cause a singularity in a static calculation. If you applied a force and a DDL_IMPO on the same node--> singularity. There are many possibilities.

Bye,

Mario.

#22 Re: Code_Aster usage » How do I assemble three meshes together into one? » 2020-02-13 20:09:17

mf

Hi again,

here is the above .comm with your nonlinear materials. I am really not sure about the units, though (as you suspected, Aster is unitless).

Bye,

Mario.

#23 Re: Code_Aster usage » How do I assemble three meshes together into one? » 2020-02-13 19:51:12

mf

Hello,

it took me a while to understand your question. I modified your LHD.hdf and managed to run it with linear materials. At the moment, I am converting it to nonlinear. I hope to get it running within the next hour.

You had some minor mistakes in there. I did the following:
1) Commented out the functions, thus turning every material linear. You had name conflicts in there: the DEFI_FONCTIONs and the DEFI_MATERIAUs had the same names, and that does not work (see the short sketch after this list).
2) Placed the 3 materials in one AFFE_MATERIAU, as only one single material field is allowed. I guess the translation in SM is misleading, since 'assign material' really means 'assign material field'. I reiterate: only one material field is allowed. I think this was your question, as in your first material assignment you only entered your 'pedal' material? Take a look at the AFFE_MATERIAU further down in the file: first I assign one material to the whole model; by doing this I do not miss any group in the model, which would lead to an error. All subsequent entries relate to the two other groups.
3) Commented out all LIAISON_MAILs. Why? You already have a compound. I have never worked with a compound before, but the LIAISONs do not seem to work here; they cause an exception: <EXCEPTION> <CALCULEL4_55>.
4) Attached weak springs to all your face groups, because I still got a singularity in the simulation even with all forces turned off. This is a sign that something is wrong in the boundary conditions. What I did is not very elegant, but it helps (perhaps it is better to attach them only to the outer surface of the housing). The higher the stiffness of the springs, the greater their influence on your calculation.
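
A short sketch of points 1) and 2): distinct names for the traction function and the material, and one single AFFE_MATERIAU carrying all materials (all names and values are placeholders):

# The traction curve and the material must NOT share a name
trac_al = DEFI_FONCTION(NOM_PARA='EPSI',
                        VALE=(0.003, 210.0,    # first point consistent with E
                              0.050, 300.0))

alu = DEFI_MATERIAU(ELAS=_F(E=70000.0, NU=0.33),
                    TRACTION=_F(SIGM=trac_al))

steel = DEFI_MATERIAU(ELAS=_F(E=210000.0, NU=0.3))

# One single material field: whole model first, then the specific groups
mater = AFFE_MATERIAU(MAILLAGE=mesh,
                      AFFE=(_F(TOUT='OUI', MATER=steel),
                            _F(GROUP_MA='pedal', MATER=alu)))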

Hope it helps, I attach the .comm. Everything else is just like in your LHD.hdf,

bye

Mario.

#24 Re: Code_Aster usage » How do I assemble three meshes together into one? » 2020-02-12 21:17:38

mf

Hello,

we do have a version problem here, I confirm that. I use SM 2019.3. Therefore, I am reposting my little example with separate meshes (*.med) and the .comm file; you should be able to rebuild the example from these files.
I took a look at your example: you already use TRACTION, therefore you will already have to use STAT_NON_LINE (your material is already nonlinear). Your meshes seem to be all right, however you have a scaling problem (the compound is 1000 times smaller...). As I said, if you're unfamiliar with Aster, try the basics first: learn to walk before trying to run.
But play with my example first. Get it running and see what happens. I wouldn't worry about OpenMP and OpenMPI at this point; it's like learning guitar while worrying about the brand of your strings, it won't make you better on the instrument. Practice the scales first,

bye,

Mario.

#25 Re: Code_Aster usage » How do I assemble three meshes together into one? » 2020-02-04 20:25:03

mf

Hey there,

well this is correct, there is sometimes more than one way to do stuff in Aster. I understand what you want to do. Let me answer chronologically, reading your post:
1.) of course you can assemble more than one mesh in Aster/Asterstudy: for assembling N meshes you have to use (N-1) ASSE_MAILLAGE. This is what I would do now, having read your post. Forget about the compound in Salome for now. This way you are much more flexible in ASTER/ASTERSTUDY.
2.) Forget about your nonlinear material for now. It may introduce problems of its own. I don't know about your nonlinearities, but keep it linear in your first examples (only E and NU). More complexity can be added later, it happens on its own as you progress.
3.) I attached a neat little .hdf, which you can open in Salome-Meca. As we are not allowed to post bigger files, I had to remove the results. It is a model consisting of three cubes on top of each other with three LINEAR materials. I used LIAISON_MAIL to glue them together (starting from the fixed surface FIX in BOX1, which is the master in the first LIAISON, I glued the NODES of the adjacent surface to it as a slave). MAITRE=MASTER, ESCLAVE=SLAVE. Any keyword ending with _MA refers to MESH cells, _NO to NODES. I repeated that with BOX2 and BOX3. If I had glued the whole of BOX2 to BOX1, it would have stiffened BOX2, as all nodes have to follow (try that too; the result would not be 'correct'!). The applied force is a FORCE_FACE, therefore its unit is force/area (N/mm² = MPa in this model). ENCASTRE means 'embedded' or 'built in'; it is the same as setting all DXi and DRXi = 0, thus fixing the surface named 'FIX'. MECA_STATIQUE cannot be used with nonlinear materials, so you will have to change to STAT_NON_LINE later. (A condensed sketch of the assembly and glueing commands follows after this list.)
You will have to change the directory of the output-file to your directory (IMPR_RESU at the end). Other changes should not be necessary to run this little model.
4.) Run it (it is small, so no changes necessary there, should only take a few seconds). ncpus=3 because my machine has 6 cores (MUMPS, the solver, is fastest if ncpus = NCORES/2. I have no idea why...)
5.) open the result in ParaVis. Take a look at the created result and its components. With 'generate vectors' and a 'Common --> WarpByVector' filter you will be able to see the deformation. DEPL = deplacement (displacement), VMIS = von Mises stress, ... Take a look at CALC_CHAMP to see which keywords (SIEF_XXXX, SIEQ_XXXX, EPSI_XXXX) created these results (for example, all SIEQ_XXXX results provide the components VMIS, TRESCA, PRIN_1 and so on). Consider reading the Aster documentation of the commands used: AFFE_CHAR_MECA, MECA_STATIQUE, ...
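
A condensed sketch of the mesh assembly and glueing described in points 1.) and 3.) (all names are placeholders; the attached .hdf contains the full detail):

# Assemble three imported meshes into one (N meshes -> N-1 ASSE_MAILLAGE)
mesh12 = ASSE_MAILLAGE(MAILLAGE_1=box1, MAILLAGE_2=box2, OPERATION='SUPERPOSE')
mesh = ASSE_MAILLAGE(MAILLAGE_1=mesh12, MAILLAGE_2=box3, OPERATION='SUPERPOSE')

# Glue the slave nodes of each interface to the master side of the box below
glue = AFFE_CHAR_MECA(MODELE=model,
                      LIAISON_MAIL=(_F(GROUP_MA_MAIT='BOX1',
                                       GROUP_NO_ESCL='BOX2_BOT_NO',
                                       TYPE_RACCORD='MASSIF'),
                                    _F(GROUP_MA_MAIT='BOX2',
                                       GROUP_NO_ESCL='BOX3_BOT_NO',
                                       TYPE_RACCORD='MASSIF')))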

If you are not sure what happens, create small examples (this is my test example no. 15...) that do what you want and take only seconds to calculate. Otherwise everything takes too long and frustration builds (been there...).

The book by Jean-Pierre Aubry is indeed VERY helpful; read it and try to understand how ASTER works. It has its special moments and its own logic, but it is one of the best tools out there. I have been using it for 7-8 months, and I was not a calculating engineer before (as a materials scientist I had a seminar on Abaqus 15 years ago :-) ... if I can do it, anybody can). If you use it the 'right' way, it is VERY powerful. Bear in mind that it is certified for nuclear applications in France, so if you apply the correct commands your results won't be far off.

Apply this example to your model until it works with linear materials (apply the ASTER commands to your own meshes and groups). Then change to nonlinear: you will have to modify the materials, define a few functions for the material definitions and for the simulation time in STAT_NON_LINE (DEFI_FONCTION...), and of course switch to STAT_NON_LINE.

You are on a Linux machine, right? If not, change to Linux (my best advice today),

cheers,

Mario.