
**todd_alan_martin** (Member)
- Registered: 2008-03-06
- Posts: 131

Hi

I have run two simple problems for a 10 mm thick aluminium mast (modelled with DST shell elements) subjected to a uniform axial load.

In the first example the shell consists of one 10 mm layer, whereas in the second it consists of ten 1 mm layers. Both solutions should be identical. However, the time taken to solve and extract the results increases 5x for the multi-layer shell, and the slowdown worsens if the plate is divided into even more layers.

The time taken to calculate the strain results on layer ten (4x longer than for layer one) is also alarming; the time for the two layers should be equal. Does the solver calculate and discard the results for all the intermediate layers?

The command, geometry and results files are attached. The examples take only a few seconds to run. Can someone look into this please?

Todd.

*Last edited by todd_alan_martin (2011-03-25 05:46:45)*


**todd_alan_martin** (Member)
- Registered: 2008-03-06
- Posts: 131

Since no one seems to be looking into this, I have loaded up Eclipse with Photran. Can someone please tell me the Fortran file where the laminate results are calculated?

Thanks.

*Last edited by todd_alan_martin (2011-01-06 09:40:06)*


**Thomas DE SOZA** (Guru)
- From: EDF
- Registered: 2007-11-23
- Posts: 3,066

Hi,

I had a quick look and profiled the test case. The fact that the computation takes noticeably longer for 10 layers than for 1 seems normal: for each layer one needs to recover the material properties and other data, which takes time, and Code_Aster does not know that all 10 layers in your case are identical.

Regarding CALC_ELEM taking much more time for the ten-layer case, that does not seem normal at all, and we'll look into it.

Thanks for the feedback.

TdS


**todd_alan_martin** (Member)
- Registered: 2008-03-06
- Posts: 131

Hi Thomas.

I agree that more time is required to assemble the plate membrane, bending and transverse shear stiffness matrices for 10 layers than for 1, but this should be a fraction of the total computation time. The rest of the computation should proceed at the same rate, regardless of the lay-up.

In another, more complicated example, I am observing a significant, steadily increasing computation time as the number of layers grows.


**todd_alan_martin** (Member)
- Registered: 2008-03-06
- Posts: 131

Hi

I am raising this issue again, as there has been no response regarding a solution.

I am trying to solve a real-world problem with a 40-layer anisotropic laminate and 90k plate elements. The solution time in MECA_STATIQUE is 10 minutes, but when I tried to calculate element results on the inner and outer surfaces with CALC_ELEM, the solver had still not finished after 1.5 hours. I am running a dual-core T7500 2.2 GHz machine with 2 GB of RAM. This is a show-stopper.

The same problem solves in 10 minutes with my Nastran solver on a Pentium 4 2.4 GHz machine with 1 GB of RAM.

So I created three simplified models with 1, 10 and 100 layers respectively in a 10 mm thick aluminium laminate. As you can see, the solution time climbs from 3.89 s to 538.22 s, when it should remain almost constant.

Is the solver trying to integrate through more and more layers as the count increases? Please view the attached models and results files.

With 1 layer

```
********************************************************************************
* COMMAND            :  USER : SYSTEM : USER+SYS : ELAPSED                     *
********************************************************************************
* init (jdc)         :  1.64 :   0.10 :     1.74 :    1.85 *
* . compile          :  0.01 :   0.00 :     0.01 :    0.00 *
* . exec_compile     :  0.14 :   0.00 :     0.14 :    0.15 *
* . report           :  0.01 :   0.00 :     0.01 :    0.01 *
* . build            :  0.00 :   0.00 :     0.00 :    0.00 *
* DEBUT              :  0.03 :   0.05 :     0.08 :    0.63 *
* LIRE_MAILLAGE      :  0.04 :   0.01 :     0.05 :    0.66 *
* DEFI_GROUP         :  0.02 :   0.00 :     0.02 :    0.03 *
* AFFE_MODELE        :  0.01 :   0.01 :     0.02 :    0.19 *
* DEFI_MATERIAU      :  0.01 :   0.00 :     0.01 :    0.04 *
* DEFI_COQU_MULT     :  0.00 :   0.00 :     0.00 :    0.00 *
* AFFE_MATERIAU      :  0.01 :   0.00 :     0.01 :    0.03 *
* AFFE_CARA_ELEM     :  0.10 :   0.00 :     0.10 :    0.25 *
* AFFE_CHAR_MECA     :  0.12 :   0.00 :     0.12 :    0.22 *
* MECA_STATIQUE      :  0.79 :   0.05 :     0.84 :    1.36 *
* CALC_ELEM          :  0.31 :   0.00 :     0.31 :    0.32 *
* CALC_ELEM          :  0.29 :   0.00 :     0.29 :    0.31 *
* IMPR_RESU          :  0.20 :   0.00 :     0.20 :    0.20 *
* FIN                :  0.02 :   0.05 :     0.07 :    0.09 *
* . part Superviseur :  1.68 :   0.16 :     1.84 :    2.64 *
* . part Fortran     :  1.93 :   0.12 :     2.05 :    3.67 *
********************************************************************************
* TOTAL_JOB          :  3.61 :   0.28 :     3.89 :    6.34 *
********************************************************************************
```

With 10 layers

```
********************************************************************************
* COMMAND            :  USER : SYSTEM : USER+SYS : ELAPSED                     *
********************************************************************************
* init (jdc)         :  1.69 :   0.11 :     1.80 :    1.94 *
* . compile          :  0.01 :   0.00 :     0.01 :    0.01 *
* . exec_compile     :  0.14 :   0.00 :     0.14 :    0.15 *
* . report           :  0.02 :   0.00 :     0.02 :    0.01 *
* . build            :  0.00 :   0.00 :     0.00 :    0.00 *
* DEBUT              :  0.01 :   0.03 :     0.04 :    0.06 *
* LIRE_MAILLAGE      :  0.02 :   0.01 :     0.03 :    0.02 *
* DEFI_GROUP         :  0.01 :   0.00 :     0.01 :    0.01 *
* AFFE_MODELE        :  0.00 :   0.00 :     0.00 :    0.03 *
* DEFI_MATERIAU      :  0.00 :   0.00 :     0.00 :    0.00 *
* DEFI_COQU_MULT     :  0.00 :   0.00 :     0.00 :    0.00 *
* AFFE_MATERIAU      :  0.00 :   0.00 :     0.00 :    0.01 *
* AFFE_CARA_ELEM     :  0.06 :   0.00 :     0.06 :    0.07 *
* AFFE_CHAR_MECA     :  0.10 :   0.00 :     0.10 :    0.11 *
* MECA_STATIQUE      :  3.34 :   0.04 :     3.38 :    3.52 *
* CALC_ELEM          :  1.48 :   0.00 :     1.48 :    1.52 *
* CALC_ELEM          :  5.52 :   0.01 :     5.53 :    5.54 *
* IMPR_RESU          :  0.20 :   0.01 :     0.21 :    0.20 *
* FIN                :  0.02 :   0.06 :     0.08 :    0.13 *
* . part Superviseur :  1.71 :   0.14 :     1.85 :    2.01 *
* . part Fortran     : 10.76 :   0.13 :    10.89 :   11.16 *
********************************************************************************
* TOTAL_JOB          : 12.48 :   0.27 :    12.75 :   13.17 *
********************************************************************************
```

With 100 layers

```
********************************************************************************
* COMMAND            :   USER : SYSTEM : USER+SYS : ELAPSED                    *
********************************************************************************
* init (jdc)         :   1.65 :   0.12 :     1.77 :    1.77 *
* . compile          :   0.01 :   0.00 :     0.01 :    0.01 *
* . exec_compile     :   0.15 :   0.01 :     0.16 :    0.17 *
* . report           :   0.04 :   0.00 :     0.04 :    0.03 *
* . build            :   0.00 :   0.00 :     0.00 :    0.00 *
* DEBUT              :   0.02 :   0.03 :     0.05 :    0.05 *
* LIRE_MAILLAGE      :   0.02 :   0.01 :     0.03 :    0.02 *
* DEFI_GROUP         :   0.01 :   0.00 :     0.01 :    0.01 *
* AFFE_MODELE        :   0.00 :   0.00 :     0.00 :    0.01 *
* DEFI_MATERIAU      :   0.01 :   0.00 :     0.01 :    0.00 *
* DEFI_COQU_MULT     :   0.02 :   0.00 :     0.02 :    0.03 *
* AFFE_MATERIAU      :   0.01 :   0.00 :     0.01 :    0.00 *
* AFFE_CARA_ELEM     :   0.06 :   0.00 :     0.06 :    0.07 *
* AFFE_CHAR_MECA     :   0.11 :   0.00 :     0.11 :    0.10 *
* MECA_STATIQUE      :  86.79 :   0.18 :    86.97 :   88.05 *
* CALC_ELEM          :  13.53 :   0.04 :    13.57 :   13.77 *
* CALC_ELEM          : 434.63 :   0.50 :   435.13 :  441.19 *
* IMPR_RESU          :   0.23 :   0.01 :     0.24 :    1.14 *
* FIN                :   0.06 :   0.18 :     0.24 :    0.88 *
* . part Superviseur :   1.68 :   0.15 :     1.83 :    1.84 *
* . part Fortran     : 535.47 :   0.92 :   536.39 :  545.33 *
********************************************************************************
* TOTAL_JOB          : 537.15 :   1.07 :   538.22 :  547.17 *
********************************************************************************
```


**Thomas DE SOZA** (Guru)
- From: EDF
- Registered: 2007-11-23
- Posts: 3,066

Hi Todd,

Thanks a lot for the cases; they'll help us examine the problem.

I did have a look at the problem a while ago, and it seems indeed that for each requested layer the integration takes place from the bottom layer up to the current one, so the total number of operations should be equal to N*(N+1)/2... We still have to understand why.
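That operation count can be checked with a small sketch (illustrative Python, not Code_Aster code): if recovering the results on layer k requires integrating from the bottom layer up to layer k, then requesting all N layers costs the triangular number N(N+1)/2 steps.

```python
def total_integration_steps(n_layers: int) -> int:
    # Layer k costs k integration steps (bottom layer up to layer k),
    # so requesting every layer costs 1 + 2 + ... + N steps in total.
    return sum(k for k in range(1, n_layers + 1))

# Matches the closed form N * (N + 1) / 2:
assert total_integration_steps(10) == 55
assert total_integration_steps(100) == 5050
```

For 100 layers this is 5050 steps, roughly 50x more work than a scheme that touches each layer only once, which is consistent with the measured blow-up above.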

We'll look into it ASAP, but a fix is very unlikely to make it into stable version 10.4, since fixing the problem with offsets is priority number one.

TdS


**Thomas DE SOZA** (Guru)
- From: EDF
- Registered: 2007-11-23
- Posts: 3,066

Hi again,

We should be able to accelerate the solution time for the options other than SIEF_ELGA. Indeed this option, which is automatically computed when solving with MECA_STATIQUE, computes the stress tensor for every layer and, within each layer, at each point through the thickness (inf, mid, sup). Note that this can be skipped by adding OPTION='SANS' to MECA_STATIQUE.
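For reference, the skip might look like this in a command file (a sketch only: RESU, MODE, MAT, CARA and CHAR are placeholder names, and keyword availability depends on the Code_Aster version):

```python
# Code_Aster command-file fragment (sketch). OPTION='SANS' skips the
# automatic SIEF_ELGA computation, which otherwise loops over every layer.
RESU = MECA_STATIQUE(MODELE=MODE,
                     CHAM_MATER=MAT,
                     CARA_ELEM=CARA,
                     EXCIT=_F(CHARGE=CHAR),
                     OPTION='SANS')
```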

Moreover, the shear stress in a layer must be computed with an integral from the bottom layer up to that particular layer (in order to satisfy the free-surface condition); see for example R4.01.01, pp. 15-17. This means that for N layers we perform O(N^2) operations, which explains the dramatic increase in computation time with the number of layers.

However, since only the stresses require this special calculation, we will skip it for the strains, and options such as EPSI_* will take much less time in the future!

TdS


**todd_alan_martin** (Member)
- Registered: 2008-03-06
- Posts: 131

Hi Thomas

Thanks for the information.

It seems to me that, since the shear stress is zero on both free surfaces, the most expensive calculation should be for the middle layers, as the integration can be started from either the top or the bottom surface. The current method of always starting the integration from layer 1 makes the calculation of the first layer very fast and the last layer very slow. Surely this can be fixed fairly easily by starting the integration at the free surface closest to the requested layer. In this way the cost for layer 1 equals that for layer N, layer 2 equals layer N-1, and so on.

The integration of the through-thickness shear stress, layer by layer, is a useful feature for ply "strength" calculation. (My Nastran solver just calculates a weighted shear force resultant, corresponding to a constant strain, and divides it evenly across the layers.) However, most of the time I am not interested in observing the stresses/strains in each layer. Instead I essentially want to model a homogeneous anisotropic plate and observe the strains at the free surfaces. I am wondering whether ELAS_COQUE could be used for this. See this link: http://www.code-aster.org/forum2/viewtopic.php?id=13674

Thanks,

Todd.

*Last edited by todd_alan_martin (2011-04-08 05:18:02)*


**Thomas DE SOZA** (Guru)
- From: EDF
- Registered: 2007-11-23
- Posts: 3,066

todd_alan_martin wrote:

It seems to me that, since the shear stress is zero on both free surfaces, the most expensive calculation should be for the middle layers, as the integration can be started from either the top or the bottom surface. The current method of always starting the integration from layer 1 makes the calculation of the first layer very fast and the last layer very slow. Surely this can be fixed fairly easily by starting the integration at the free surface closest to the requested layer. In this way the cost for layer 1 equals that for layer N, layer 2 equals layer N-1, and so on.

Interesting thought. It should at least halve the work (the total cost over all layers drops from about N²/2 to about N²/4 integration steps, and the worst single-layer cost from N to N/2 steps); still, the computation of SIEF_ELGA will cost a lot.
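The two integration schemes can be compared with a quick sketch (illustrative Python, not Code_Aster code):

```python
def steps_from_bottom(layer: int) -> int:
    # Current scheme: always integrate from layer 1 up to the requested layer.
    return layer

def steps_from_nearest_surface(layer: int, n_layers: int) -> int:
    # Proposed scheme: the shear stress is zero on both free surfaces,
    # so integration can start from whichever surface is closer.
    return min(layer, n_layers - layer + 1)

N = 100
old = sum(steps_from_bottom(k) for k in range(1, N + 1))              # ~N^2/2
new = sum(steps_from_nearest_surface(k, N) for k in range(1, N + 1))  # ~N^2/4
assert (old, new) == (5050, 2550)
```

So when every layer is requested, the total work roughly halves; the bigger win is for a single layer near the top, whose cost drops from N steps to at most N/2.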

todd_alan_martin wrote:

The integration of the through thickness shear stress, layer by layer, is a useful feature for ply "strength" calculation. (My Nastran solver just calculates a weighted shear force resultant, corresponding to a constant strain, and divides it evenly across the layers.)

If you are only sometimes interested in the stress on a particular layer, you should disable the SIEF_ELGA computation by setting OPTION='SANS' in MECA_STATIQUE, compute the stress tensor with CALC_ELEM/OPTION='SIGM_ELNO', and use REPE_COQUE as you did for EPSI_ELNO. The advantage is that you select a particular layer and a particular point in the thickness: the cost will be greatly reduced compared to SIEF_ELGA, where everything is computed. However, the top layer will still cost more than the bottom layer.
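A hedged sketch of that post-processing (placeholder names; the exact REPE_COQUE keywords vary between Code_Aster versions):

```python
# Code_Aster command-file fragment (sketch). RESU is assumed to come from a
# MECA_STATIQUE run with OPTION='SANS'; the stress is then recovered on one
# chosen layer and one chosen point in the thickness only.
RESU = CALC_ELEM(reuse=RESU,
                 RESULTAT=RESU,
                 OPTION='SIGM_ELNO',
                 REPE_COQUE=_F(NUME_COUCHE=10,      # requested layer
                               NIVE_COUCHE='SUP'))  # INF / MOY / SUP
```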

Note that SIEF_ELGA and SIGM_ELNO are exactly the same quantities, but SIEF_ELGA is calculated at the Gauss points whereas SIGM_ELNO is calculated at the nodes (I assume this is not a problem for you).

Just a little satisfaction for us: it is true that the solver is unreasonably slow (as you put it) when it comes to composites, and even though the situation will improve greatly, it will remain costly for hundred-layer composites, **but** we output the real shear stress through the thickness and not a shear force resultant like Nastran ;-)

todd_alan_martin wrote:

However, most of the time I am not interested in observing the stresses/strains in each layer. Instead I essentially want to model a homogeneous anisotropic plate and observe the strains at the free surfaces. I am wondering whether ELAS_COQUE could be used for this. See this link: http://www.code-aster.org/forum2/viewtopic.php?id=13674

Once the little performance problem with EPSI_ELNO is solved, you should be all set. I don't think ELAS_COQUE is the answer for you, since:

- you must compute yourself the homogeneous material data entered in ELAS_COQUE (which can be cumbersome, particularly as the thickness intervenes there, as you pointed out in the other thread): computing the homogenized behaviour of a multi-layer shell is precisely the job of DEFI_COQU_MULT;

- even if you plan to model a composite using multiple ELAS_COQUE materials (one per layer, each with its own offset), the shear stress won't be computed correctly, as noted in the other thread regarding offsets.

TdS


**Thomas DE SOZA** (Guru)
- From: EDF
- Registered: 2007-11-23
- Posts: 3,066

As of version 11.0.2, the performance of any option apart from SIEF_ELGA should be the same no matter which layer it is computed on.

TdS

*Last edited by Thomas DE SOZA (2011-04-26 13:16:14)*


**todd_alan_martin** (Member)
- Registered: 2008-03-06
- Posts: 131

Hi Thomas

Now that I've had time to read R3.07.03 again, I have some suggestions.

In the case of DKT models, the solution is obtained for zero through-thickness shear strain, so anyone running such a model should not be interested in through-thickness shear stresses. It seems logical to me that calculating an average shear stress, by dividing the resultant (equilibrium) shear force by the laminate thickness, would suffice in this case, since the computational expense of performing a numerical integration across multiple layers is too great.
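The shortcut proposed above is a one-line formula; as a sketch (illustrative Python, units left to the user):

```python
def average_transverse_shear_stress(shear_resultant: float,
                                    thickness: float) -> float:
    # Cheap DKT estimate: divide the equilibrium shear force resultant T
    # by the laminate thickness h instead of integrating layer by layer.
    return shear_resultant / thickness

# e.g. T = 500 N/m over a 10 mm laminate gives 50 kPa average shear stress:
assert average_transverse_shear_stress(500.0, 0.010) == 50000.0
```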

When one is interested in through-thickness shear stresses, one should run a DST model. However, I believe a numerical integration is also unnecessary here, as an analytical integration has already been performed. I hope I can explain my argument clearly in this post without the detailed mathematical equations.

Referring to Annexe 2 of R3.07.03, we see that the assumption is made that Hct = C11^(-1), and that for a non-homogeneous (possibly non-symmetric) laminate C11 can be calculated by a summation. This summation represents an analytical layer-by-layer integration of the D1 equation. Therefore the through-thickness shear stress in each ply can be calculated directly from the shear force resultant, T, if the D1 value is retained at each layer. In other words, an analytical integration is performed for each laminate (but not each element) during the construction of the Hct matrix early in the solution process. I suggest storing these D1 values, possibly 3 per layer (bottom, middle, top), and using them to calculate the shear stress in each layer after generating a solution. This should be very fast. It would also be valid with or without plate offset, unless the model equilibrium is being invalidated by plate offset. I sincerely hope that is not the case.

Considering the example problem I attached previously, an isotropic plate with 100 layers, the Hct = C11^(-1) assumption is valid and the solution time would not be much greater than that required for the single-layer plate.

Of course, the assumption Hct = C11^(-1) is not always valid, but a significant difference between the calculated and expected shear energy would require a re-solve anyway, in which Hct and D2 are calculated element by element. This could be an optional, albeit slow, extra step. In its absence, the solver could simply emit a warning if the calculated shear energy differs from the expected value by a user-defined percentage. So, when there is no warning, there would be no need to calculate D2, and the results would be accurate enough with no need for numerical integration. Does this make sense?
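The caching idea can be illustrated as follows (a sketch only: the running partial sums stand in for the per-layer D1 values, not Code_Aster's actual data structures):

```python
def precompute_d1(layer_contributions):
    # One pass per laminate: accumulate the analytical layer-by-layer
    # integration once and keep the partial sum reached at each layer.
    d1, running = [], 0.0
    for c in layer_contributions:
        running += c
        d1.append(running)
    return d1

def layer_shear_stress(d1, shear_resultant, k):
    # After the precomputation, each layer is an O(1) lookup
    # scaled by the shear force resultant T.
    return shear_resultant * d1[k]

d1 = precompute_d1([0.1] * 10)   # 10 identical layers
assert len(d1) == 10
assert abs(d1[-1] - 1.0) < 1e-9
assert abs(layer_shear_stress(d1, 2.0, 4) - 1.0) < 1e-9
```

The point is the cost structure, not the exact coefficients: one O(N) pass per laminate at setup time replaces an O(N) integration per requested layer at post-processing time.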

*Last edited by todd_alan_martin (2011-04-27 02:14:21)*


**todd_alan_martin** (Member)
- Registered: 2008-03-06
- Posts: 131

Upon further consideration, D2 could also be calculated and stored for each laminate at the same time as D1. It would provide a fast way to calculate Hct and emit a warning for elements with incorrect shear energy. Of course, when D2=0 no checks would be required.


**Thomas DE SOZA** (Guru)
- From: EDF
- Registered: 2007-11-23
- Posts: 3,066

Hi,

todd_alan_martin wrote:

Referring to Annexe 2 of R3.07.03, we see that the assumption is made that Hct = C11^(-1), and that for a non-homogeneous (possibly non-symmetric) laminate C11 can be calculated by a summation. This summation represents an analytical layer-by-layer integration of the D1 equation. Therefore the through-thickness shear stress in each ply can be calculated directly from the shear force resultant, T, if the D1 value is retained at each layer. In other words, an analytical integration is performed for each laminate (but not each element) during the construction of the Hct matrix early in the solution process. I suggest storing these D1 values, possibly 3 per layer (bottom, middle, top), and using them to calculate the shear stress in each layer after generating a solution. This should be very fast. It would also be valid with or without plate offset, unless the model equilibrium is being invalidated by plate offset. I sincerely hope that is not the case.

todd_alan_martin wrote:

Upon further consideration, D2 could also be calculated and stored for each laminate at the same time as D1. It would provide a fast way to calculate Hct and emit a warning for elements with incorrect shear energy. Of course, when D2=0 no checks would be required.

I have to admit your reasoning makes sense. We don't do this right now: we only store the coefficients needed to recompute D1 and D2 for each layer (see dxdmul.f, which is called by XXXsie.f, where XXX = {dkt, dst, dkq, dsq, q4g}). dxdmul.f is the subroutine that explains the overhead when computing stresses for multi-layer plates. Doing it your way would indeed reduce the solution time, though it would cost extra storage (and multi-layer plates already require a complicated data structure).

It also seems we don't do this because the local axes of each layer might not be the same for every element (see the T1VE argument of dxdmul.f).

I'm however not skilled enough with multi-layer plates to tell more.

Maybe, if time allows, we'll look into this in the future, but composite shells are far from our priority (they don't work for non-linear analysis either).

TdS


**todd_alan_martin** (Member)
- Registered: 2008-03-06
- Posts: 131

Hi Thomas

Thanks for the subroutine information. I will have a look at it, when I get a chance.

Todd.
