The simple answer is "it depends," since each approach has pros and cons depending on your organizational structure and your visibility into and control of those systems.
Here are some common approaches as well as things to consider:
Separate integrations based on the sensitivity of the information residing on the servers
For example, each server that houses PII, financial, or other highly sensitive information gets its own integration. All systems in the medium-risk bucket share one integration, and all systems in the low-risk bucket share another.
This approach has some overhead, especially for the high-risk systems. It also requires administrators to know exactly where all of that wonderful sensitive data resides, which is not always the case, especially in fragmented environments.
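The sensitivity-based grouping above can be sketched as a simple lookup. This is a hypothetical illustration only: the tier names, hostnames, and the rule that every high-risk system gets a dedicated integration while lower tiers share one per bucket are assumptions for the example, not product behavior.

```python
# Hypothetical mapping of systems to integrations by data sensitivity.
# Hostnames and tier labels are invented for illustration.
RISK_TIERS = {
    "hr-db-01": "high",      # houses PII
    "billing-01": "high",    # houses financial data
    "intranet-01": "medium",
    "wiki-01": "low",
    "kiosk-01": "low",
}

def integration_for(system: str) -> str:
    """High-risk systems each get a dedicated integration;
    medium- and low-risk systems share one integration per tier."""
    tier = RISK_TIERS[system]
    if tier == "high":
        return f"integration-high-{system}"
    return f"integration-{tier}-shared"
```

The overhead mentioned above shows up here as one extra entry to provision per high-risk system, while the shared tiers stay flat no matter how many systems join them.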
Separate integrations based on system role
Database servers, web servers, management workstations, etc. This is a very common approach, and in most cases it is easy to place systems into a specific role bucket.
Separate integrations by department
Another common approach, especially in EDUs. It works very well if you can delegate the task of spinning up new integrations to an application administrator within each department.
Separate by department AND require systems housing sensitive data within each department to use different integrations.
We even have some customers who provision new integrations exclusively through API calls. They then push the installer to the target system with the appropriate keys included, making it even easier to spin up and manage new integrations.
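A rough sketch of that API-driven flow, using only the standard library: create the integration via an admin API, then feed the returned keys to the installer push. The endpoint path, payload fields, response shape, and installer flags are all invented for illustration; consult your product's admin API reference for the real calls and parameters.

```python
# Hypothetical sketch: provision an integration via an admin API, then
# turn the returned keys into installer arguments for the target system.
import json
import urllib.request

API_BASE = "https://admin.example.com/api/v1"  # placeholder host

def build_provision_request(app_name: str) -> urllib.request.Request:
    """Construct the POST that would create a new integration.
    (The /integrations path and payload fields are assumptions.)"""
    payload = json.dumps({"name": app_name, "type": "unix"}).encode()
    return urllib.request.Request(
        f"{API_BASE}/integrations",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def installer_args(integration: dict) -> list:
    """Turn the (assumed) API response into installer flags so the keys
    ride along with the push to the target system."""
    return [
        f"--ikey={integration['integration_key']}",
        f"--skey={integration['secret_key']}",
        f"--host={integration['api_hostname']}",
    ]
```

In practice you would add authentication headers per your admin API's documentation and send the request with `urllib.request.urlopen`; the point is that provisioning and deployment become one scriptable step.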
No matter which method you choose, creating a standard application naming convention and sticking to it will ease the administrative burden and make log correlation easier, especially if you have lots of departments and systems to manage.
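A minimal sketch of enforcing such a convention, assuming a made-up DEPT-ENV-role-host format; the fields, separators, and pattern are examples only, not a recommended standard. The value is that one validated format keeps names greppable across logs.

```python
# Hypothetical naming-convention helper. The DEPT-ENV-role-host format
# and its regex are assumptions chosen for illustration.
import re

NAME_PATTERN = re.compile(r"^[A-Z]{2,8}-(PROD|TEST|DEV)-[a-z]+-[a-z0-9-]+$")

def integration_name(dept: str, env: str, role: str, host: str) -> str:
    """Build an integration name and reject anything off-convention."""
    name = f"{dept.upper()}-{env.upper()}-{role.lower()}-{host.lower()}"
    if not NAME_PATTERN.match(name):
        raise ValueError(f"name does not fit convention: {name}")
    return name
```

Generating names through a single helper like this, rather than by hand, is what keeps the convention intact as departments and systems multiply.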
I hope this helps answer your question and gives you some food for thought.