Archived OpenModelica forums. Posting is disabled.

Alternative forums include GitHub discussions or Stack Overflow (make sure to read the Stack Overflow rules; questions need to be well-formed).




Efficient FMU Export

Hello,

I am using an FMU created with OpenModelica for reinforcement learning in Python, using the modelicagym environment. Everything works, but the bottleneck for simulation time seems to be the "opening" process of the FMU when starting the simulation. In RL, this needs to be done after every time step. My hope is to improve the speed of this process by reducing the size of the FMU. Even a small RLC circuit with some voltage and current measurements (source size: 9 kilobytes) comes out at 1323 kilobytes after the FMU export (FMI 2.0, Model Exchange).

Are there ways to reduce this size, for example by unloading unused libraries (I am not sure whether they are exported, too)? Or does anyone have other ideas on how to speed up this process? Any idea would be helpful :-)

Best regards

RLC-Network.mo

Edited by: Mota - Jan-17-20 09:46:23

Re: Efficient FMU Export

You shouldn't need to load the FMU DLLs (or similar) at every step. You can just use the same instance and run different simulations with it if you want, as they are thread safe. You can fmiReset it if needed.
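Roughly like this (an untested sketch with PyFMI, which I think modelicagym uses underneath; the FMU file name is just a placeholder):

    from pyfmi import load_fmu

    # Load and compile the FMU once, outside the training loop.
    model = load_fmu("RLC_Network.fmu")   # placeholder file name

    num_episodes = 1000
    for episode in range(num_episodes):
        model.reset()                     # reuse the same instance instead of re-loading
        res = model.simulate(final_time=1.0)
        # ... pass the results in res to the RL agent ...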

There are several targets for the FMU; you can see them in OMEdit->Tools->Options->FMI:
- static: everything is linked in; you can send the FMU to somebody and it will work
- dynamic: mostly nothing is linked in; the person you send the FMU to needs an OpenModelica installation and its libraries in the PATH (or LD_LIBRARY_PATH).
Most likely dynamic is smaller, but it has the mentioned restrictions.
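If you export from a script instead of OMEdit, I think you can pick the link type with the platforms argument of buildModelFMU. A rough sketch with OMPython (the class name RLC_Network is just a guess based on your attachment):

    from OMPython import OMCSessionZMQ

    omc = OMCSessionZMQ()
    omc.sendExpression('loadFile("RLC-Network.mo")')   # the attached model file
    # "static" links the runtime into the FMU, "dynamic" keeps it outside.
    omc.sendExpression(
        'buildModelFMU(RLC_Network, version="2.0", fmuType="me", platforms={"dynamic"})'
    )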

If your tool supports ME, then you could also use the C++ runtime (OMEdit->Tools->Options->Simulation->choose "Cpp"). That would probably produce a smaller FMU, but it might not work for all the models for which the C runtime works.
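From a script, the same choice should be possible with the --simCodeTarget compiler flag before building the FMU, something like (untested, class name again assumed):

    from OMPython import OMCSessionZMQ

    omc = OMCSessionZMQ()
    omc.sendExpression('loadFile("RLC-Network.mo")')
    # Select the C++ simulation runtime before exporting the FMU.
    omc.sendExpression('setCommandLineOptions("--simCodeTarget=Cpp")')
    omc.sendExpression('buildModelFMU(RLC_Network, version="2.0", fmuType="me")')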

Re: Efficient FMU Export

Thank you for your answer.

Yes, you are right, the model is only loaded once, at the beginning of the program. Probably I didn't formulate it clearly.

In reinforcement learning, I have to run the same simulation several thousand times to get proper results. And after each step, I have to stop the simulation to transfer the results to my RL agent, which makes the decision for the next action. So for one run, I will call model.simulate(...) about a million times. Having a smaller model should decrease the simulation time, especially with this huge number of iterations.
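For context, the stepping pattern looks roughly like this (simplified PyFMI sketch, with placeholder names for the FMU file and the step size):

    from pyfmi import load_fmu

    model = load_fmu("RLC_Network.fmu")   # placeholder file name
    opts = model.simulate_options()
    step = 0.01                           # assumed RL time step

    t = 0.0
    for _ in range(1000):                 # one episode of RL steps
        res = model.simulate(start_time=t, final_time=t + step, options=opts)
        # ... hand res over to the agent, apply its action as FMU inputs ...
        opts['initialize'] = False        # continue from the current state next call
        t += step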

Using the dynamic model doesn't work while simulating the FMU in Python, but that was a good hint.

Something like "Tidying up" the the FMU would be awesome. Basically, there are just 250 ODEs to be solved, where by far the most of them are just trivial equations, so getting rid of the most of the unused overhead like many (standard) libraries would be awesome, just like any other way of reducing the size and complexity of the FMU :-)
