AI integration in parametric design

Hello, I'm currently working on my thesis for my Bachelor's in Architecture and Construction Engineering. I'm researching how parametric design, combined with AI integration, can help optimize structural design. For this, I have challenged myself to parametrically design a steel structure for an industrial hall using some form of AI integration, and to create a design that uses fewer kilograms of steel than a traditional design made by a structural engineer.
During my research I found Dlubal's applications, which seem promising and could help me with my research. I'm new to RFEM and RSTAB, but I'd like to know whether it is possible to set up a structural (steel) design using these applications (e.g. by filling in a list of parameters such as a building's length, width, and height, and having the application draw up a model). Also, is there already some form of AI integration in these applications besides the AI assistant Mia?
Thanks in advance!

Best regards,
Arnoud

Hi Arnoud, welcome to our community. The topic of your bachelor’s thesis sounds very interesting.

First things first: You can’t use Mia for this task just yet. She currently doesn’t have access to RFEM. We’re working on it, but it’s not clear yet when that will be possible.

A promising starting point would be the MCP server, which has been included in RFEM for several versions now.

https://www.dlubal.com/en/support-and-learning/support/knowledge-base/002032

This allows you to set up your own AI agent, which can then interact with RFEM via MCP.
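Conceptually, the agent-via-MCP pattern boils down to: the server advertises a set of tools, the LLM decides which tool to call with which arguments, and the client routes each call to the server and feeds the result back. Here is a minimal, self-contained sketch of that loop in plain Python. To be clear: the tool names (`create_member`, `run_analysis`) and their parameters are invented for illustration and are not the actual tools exposed by RFEM's MCP server.

```python
# Illustrative sketch of the agent/MCP tool-dispatch pattern.
# The tool names and arguments below are made up for illustration;
# they are NOT the real tools exposed by RFEM's MCP server.

def create_member(start, end, section):
    """Stand-in for a tool that creates a member in the model."""
    return {"status": "ok", "member": {"start": start, "end": end, "section": section}}

def run_analysis():
    """Stand-in for a tool that triggers a calculation."""
    return {"status": "ok", "max_utilization": 0.87}

# The MCP server advertises its tools; the LLM picks which one to call.
TOOLS = {"create_member": create_member, "run_analysis": run_analysis}

def dispatch(tool_call):
    """What the MCP client does: route the model's tool call to the server."""
    name = tool_call["name"]
    args = tool_call.get("arguments", {})
    return TOOLS[name](**args)

# A hard-coded "plan" standing in for what the LLM would decide step by step.
plan = [
    {"name": "create_member", "arguments": {"start": 1, "end": 2, "section": "IPE 200"}},
    {"name": "run_analysis", "arguments": {}},
]
results = [dispatch(call) for call in plan]
print(results[-1]["max_utilization"])  # the agent reads this result and iterates
```

In a real session, the "plan" is not hard-coded: the LLM generates each tool call from the previous results, which is exactly where the response times and token usage I describe below come from.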

To test the MCP server, I used the Codex plugin from OpenAI for Visual Studio Code with the GPT-5.3-Codex model. That worked quite well, but any other client that supports MCP can be used instead.

During my test, I performed exactly the kind of optimization task you're planning, though on a two-span steel beam. A whole hall is a much bigger challenge.

I’ll share a bit about my experiences.

During my tests, I found that the computation times were already significant even for such a small system. By that I don't mean RFEM's computation times, but Codex's response times. There is certainly room for optimization in the prompting, but it doesn't go very fast.

Another potential issue is the size of the context window. The Codex plugin displays how large the context window is and how much of it is in use. I managed to exceed it several times; when that happens, the client just starts acting erratically. Admittedly, I did more than just optimize in those sessions. Nevertheless, you should use tokens sparingly, and your LLM should have a large context window. I prompted in German; you can save tokens by using English instead. It might also make sense to split the task into several independent sessions.

The LLM's behavior wasn't always reproducible either. With exactly the same prompt, it sometimes worked very well and other times just produced nonsense.

I’d like to give you two more tips regarding RFEM:

RFEM 6 has an API. This allows you to access most of RFEM's functions from your own programs (Python, C#). Further information:

https://apidocs.dlubal.com/
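For the "fill in parameters, get a model" workflow you describe, the usual pattern with such an API is a small script that turns a parameter set into node and member lists, which you then push into RFEM through the API calls. Below is a minimal, self-contained sketch of the parametric part in pure Python. Note the assumptions: it deliberately uses no RFEM imports, the `HallParams` class and tuple-based data layout are my own choices (not the API's object model), and the geometry is simplified to flat-roofed portal frames.

```python
from dataclasses import dataclass

@dataclass
class HallParams:
    """Hypothetical parameter set for an industrial hall (units: m)."""
    length: float        # hall length along its axis
    width: float         # frame span
    eaves_height: float  # column height
    n_frames: int        # number of portal frames

def generate_frames(p: HallParams):
    """Turn a parameter set into node coordinates and member connectivity
    for a row of simple portal frames (flat roof, for brevity)."""
    spacing = p.length / (p.n_frames - 1)
    nodes, members = [], []
    for i in range(p.n_frames):
        x = i * spacing
        base = len(nodes)
        # Four nodes per frame: two column bases, two eaves points.
        nodes += [(x, 0.0, 0.0), (x, 0.0, p.eaves_height),
                  (x, p.width, p.eaves_height), (x, p.width, 0.0)]
        # Three members per frame: two columns and one rafter,
        # given as (start_node_index, end_node_index).
        members += [(base, base + 1), (base + 1, base + 2), (base + 2, base + 3)]
    return nodes, members

nodes, members = generate_frames(HallParams(30.0, 20.0, 6.0, n_frames=6))
print(len(nodes), len(members))  # → 24 18
```

In a real script, the loop body would create the corresponding node and member objects through the RFEM API instead of appending tuples; the parametric logic stays the same.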

In addition, there is the Model Optimization add-on for RFEM, which allows you to optimize your model using the particle swarm optimization method.

https://www.dlubal.com/en/products/rfem-fea-software/add-ons-for-rfem-6/special-solutions/model-optimization
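To get a feel for what particle swarm optimization does, here is a minimal, self-contained version in plain Python. The objective is a deliberately made-up stand-in (mass grows with a section height, with an artificial feasibility limit at 180 mm), not RFEM's actual formulation; the swarm parameters (`w`, `c1`, `c2`) are common textbook defaults.

```python
import random

random.seed(1)  # for a repeatable run of this sketch

def mass(h):
    """Toy objective: steel mass grows with section height h (mm). Made up."""
    return 0.05 * h

def feasible(h):
    """Toy constraint: pretend the section fails below h = 180 mm. Made up."""
    return h >= 180.0

def objective(h):
    # Penalize infeasible designs instead of rejecting them outright.
    return mass(h) + (1e3 if not feasible(h) else 0.0)

def pso(lo, hi, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1D particle swarm: each particle remembers its own best
    position, and all particles are pulled toward the swarm's global best."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = min(pos, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)  # clamp to bounds
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
            if objective(pos[i]) < objective(gbest):
                gbest = pos[i]
    return gbest

best_h = pso(100.0, 400.0)
print(round(best_h, 1))  # converges near the constraint boundary at 180 mm
```

The optimum sits right at the feasibility boundary, which is typical for mass minimization: the lightest design is the one that just barely passes the checks. The add-on applies the same idea to real RFEM models and design checks.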

I wish you every success with your work. Perhaps you can share your experiences here later.

Best regards,
Frank
