April 8, 2026 8:28 am

Limitations of AI: incrementalism versus a systems approach?


An incrementalist brain?

AIs tend to approach things incrementally. That can be useful, but it carries the danger of losing sight of the whole. Guarding against that may fall to the user – a non-trivial burden.

Incrementalism galore

This very website needed a facelift and an update after a decade of service. In this day and age, that means turning to an LLM tool to help with the coding, in this case ChatGPT.

Quite an experience, and one worth a few reflections. Apart from the delight of having an assistant with infinite patience and never an ounce of irritation, the limitations quickly become apparent, and they are quite concerning.

An initial round of updates yielded a deluge of changes and improvements, steadily improving the site. Delightful. Until, at some point, every change appeared to have growing side effects on other pages. A stack of band-aids had led to a mess. It took sitting down with pencil and paper to nail down the proper data architecture of the site and to design a nine-point plan to migrate the whole thing back to an orderly state – albeit with the help of ChatGPT.

Once order had been restored, I needed to insist with every command that changes should not be incremental but consistent with the overall system design. Faithful as a parrot, ChatGPT repeated that mantra and proceeded to suggest more band-aids anyway.

A systemic approach matters

At the heart of all my work has been the effort to encourage a systemic approach to issues — a complex systems approach. This one experience, over many days of programming, makes me very concerned that AI will set us back on that journey. It echoes the criticism of AI voiced by every physicist familiar with the neural-network techniques at the core of LLMs.
