Index » Users » marcelot » Profile
Posts
Dear All,
I am using OMSimulator to co-simulate an FMU together with a model I programmed in Python. I need to iterate both models when computing the results for a single time step. I am currently calling stepUntil repeatedly with the same time stamp, which appears to work.
But based on the FMI 2.0 specification, I was wondering whether I could use exportSnapshot and importSnapshot to mimic the fmi2GetFMUstate/fmi2SetFMUstate behavior.
What would be the best practice in my situation?
I would really appreciate any feedback on this topic.
Thanks in advance for your attention.
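For reference, the rollback-and-iterate pattern I have in mind (the one that fmi2GetFMUstate/fmi2SetFMUstate enable) can be sketched in plain Python. The Subsystem class, tolerance, and Euler step below are purely illustrative stand-ins, not OMSimulator API:

```python
import copy

class Subsystem:
    """Toy stand-in for an FMU: x' depends on the other model's output."""
    def __init__(self, x0, gain):
        self.x = x0
        self.gain = gain

    def get_state(self):           # analogous to fmi2GetFMUstate
        return copy.deepcopy(self.__dict__)

    def set_state(self, state):    # analogous to fmi2SetFMUstate
        self.__dict__.update(copy.deepcopy(state))

    def step(self, u, dt):         # explicit Euler step driven by input u
        self.x += dt * (self.gain * u - self.x)
        return self.x

def co_simulate_step(a, b, dt, tol=1e-10, max_iter=50):
    """Iterate one communication step until the coupling variables converge."""
    sa, sb = a.get_state(), b.get_state()  # snapshot both models at step start
    ya, yb = a.x, b.x
    for _ in range(max_iter):
        a.set_state(sa)            # roll both models back to the snapshot ...
        b.set_state(sb)
        ya_new = a.step(yb, dt)    # ... and redo the step with updated inputs
        yb_new = b.step(ya, dt)
        if abs(ya_new - ya) < tol and abs(yb_new - yb) < tol:
            return ya_new, yb_new  # coupling variables converged
        ya, yb = ya_new, yb_new
    return ya, yb

a = Subsystem(1.0, 0.5)
b = Subsystem(0.0, 0.5)
ya, yb = co_simulate_step(a, b, 0.1)
```

This is what I am effectively approximating today by calling stepUntil repeatedly with the same time stamp; a true state snapshot/restore would make the rollback explicit.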
Hello,
I am interested in this too.
So far, I have only been able to get directional derivatives in Model Exchange mode, using the fmiFlags '-d=newInst,-disableDirectionalDerivatives'. For the co-simulation FMU, however, the directional derivatives are not available. Is this really how it is supposed to be?
I'd really appreciate some help on this too.
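In case it helps others debug this: whether an FMU advertises directional derivatives at all can be checked in its modelDescription.xml via the providesDirectionalDerivative capability attribute on the ModelExchange/CoSimulation elements. A minimal sketch with an inline, made-up model description (a real one would be read from inside the .fmu zip archive):

```python
import xml.etree.ElementTree as ET

# Illustrative FMI 2.0 model description: the ModelExchange interface
# declares directional derivatives, the CoSimulation interface does not.
MODEL_DESCRIPTION = """<?xml version="1.0" encoding="UTF-8"?>
<fmiModelDescription fmiVersion="2.0" modelName="Demo" guid="{demo-guid}">
  <ModelExchange modelIdentifier="Demo" providesDirectionalDerivative="true"/>
  <CoSimulation modelIdentifier="Demo"/>
</fmiModelDescription>"""

def directional_derivative_support(xml_text):
    """Return {interface: bool} for the providesDirectionalDerivative flag."""
    root = ET.fromstring(xml_text)
    support = {}
    for tag in ("ModelExchange", "CoSimulation"):
        node = root.find(tag)
        if node is not None:
            # The FMI 2.0 default for this capability flag is "false".
            support[tag] = node.get("providesDirectionalDerivative", "false") == "true"
    return support

print(directional_derivative_support(MODEL_DESCRIPTION))
```

If the CoSimulation element of your exported FMU lacks the attribute, the export itself dropped the capability, independent of what the importing tool does.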
Hi all,
I have been testing a simulation of two coupled FMUs using OMSimulator in Python, following the OMSimulator documentation quite closely. So far the simulations are working properly. Could anyone tell me where I can find more information on the master algorithms solver_sc_cvode, solver_wc_ma, solver_wc_mav, and solver_wc_mav2? On top of that, I would like to know more about the system types (system_sc and system_wc).
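My rough understanding so far, which I would be glad to have confirmed: 'sc' stands for strongly coupled (the whole system is integrated as one set of equations, e.g. by CVODE) and 'wc' for weakly coupled (submodels exchange data only at communication points). A fixed-step weakly coupled master, which is what I picture 'ma' doing, can be sketched with toy models in plain Python (this is an illustration of the idea, not OMSimulator's actual implementation):

```python
class Toy:
    """Toy submodel, dx/dt = k*u - x, advanced with explicit Euler."""
    def __init__(self, x0, k):
        self.x, self.k = x0, k
    def output(self):
        return self.x
    def advance(self, u, h):
        self.x += h * (self.k * u - self.x)

def fixed_step_master(models, couplings, t_end, h):
    """Fixed-step master: exchange outputs at communication points, then
    advance every submodel independently over the same step h."""
    t = 0.0
    while t < t_end - 1e-12:
        outputs = {name: m.output() for name, m in models.items()}
        for name, m in models.items():
            src = couplings[name]          # whose output feeds this model
            m.advance(outputs[src], h)     # all inputs frozen during the step
        t += h
    return {name: m.output() for name, m in models.items()}

result = fixed_step_master(
    {"a": Toy(1.0, 0.5), "b": Toy(0.0, 0.5)},
    {"a": "b", "b": "a"},                  # a's input is b's output, and vice versa
    t_end=1.0, h=0.1)
```

Presumably the adaptive variants (mav, mav2) choose the communication step h based on an error estimate instead of keeping it fixed, but that is exactly the part I would like documentation on.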
I do really appreciate your help, AnHeuermann. I will open the ticket as you recommended.
Thank you, AnHeuermann, for your prompt reply.
Would you have any further advice on how to circumvent the problem I am facing? I could not find much information in the PyFMI documentation either.
Hello Everyone,
I am working on a co-simulation and experiencing some difficulties. We are using the following procedure.
We first generate the system FMU with OMPython in co-simulation mode, choosing the CVODE solver via the compiler option "--fmiFlags=s:cvode,nls:hybrid". For the co-simulation itself, we use the PyFMI module to load, initialize, and solve the system. At first, the system initialization would always fail; once we generated the FMU with the extra compiler option "--homotopyApproach=adaptiveGlobal", initialization finally succeeded. During the simulation, however, the solver fails with the following error:
[CVODE ERROR] CVode tout too close to t0 to start integration.
Here are the messages thrown by the solver at the beginning of the simulation:
LOG_SOLVER | info | CVODE linear multistep method CV_BDF
LOG_SOLVER | info | CVODE maximum integration order CV_ITER_NEWTON
LOG_SOLVER | info | CVODE use equidistant time grid YES
LOG_SOLVER | info | CVODE Using relative error tolerance 1.000000e-06
LOG_SOLVER | info | CVODE Using dense internal linear solver SUNLinSol_Dense.
LOG_SOLVER | info | CVODE Use internal dense numeric jacobian method.
LOG_SOLVER | info | CVODE uses internal root finding method NO
LOG_SOLVER | info | CVODE maximum absolut step size 0
LOG_SOLVER | info | CVODE initial step size is set automatically
LOG_SOLVER | info | CVODE maximum integration order 5
LOG_SOLVER | info | CVODE maximum number of nonlinear convergence failures permitted during one step 10
LOG_SOLVER | info | CVODE BDF stability limit detection algorithm OFF
So, is there a way to pass additional arguments to the CVODE solver at the moment we generate the FMU? For example, an argument enabling the internal root-finding method?
I would be deeply grateful if anyone could shed some light on this.
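One thing we are currently ruling out on our side: as far as I know, CVODE reports "tout too close to t0" when it is asked to integrate to an output time (essentially) equal to the current time, which can happen if the stepping loop produces a zero-length or duplicate communication point. When driving the FMU with a custom loop rather than a library's built-in simulate call, a guard like the following avoids that failure mode (do_step is a hypothetical callback standing in for the actual FMU step call):

```python
def run_communication_points(do_step, points, min_step=1e-12):
    """Advance through communication points, skipping zero-length steps
    that would make the underlying integrator fail (CVODE's
    'tout too close to t0' error)."""
    t = points[0]
    for target in points[1:]:
        if target - t < min_step:
            continue                # duplicate/too-close point: do not integrate
        do_step(t, target - t)      # hypothetical stand-in for the FMU step call
        t = target

# Record which steps would actually be executed for a point list
# containing duplicated communication points.
steps = []
run_communication_points(lambda t, h: steps.append((t, h)),
                         [0.0, 0.0, 0.1, 0.1, 0.2])
```

Here only the two nonzero-length steps are executed; the duplicated points at 0.0 and 0.1 are skipped instead of being handed to the integrator.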