Export/Import with PostgreSQL Tools

The regular scripts export.js and import.js are very convenient; their disadvantage is that they run much slower than native database tools such as PostgreSQL's pg_dump/pg_restore. Many admins therefore prefer the PostgreSQL tools, which is fine, especially for 1:1 backups.

Backing up a BPS schema for safekeeping is easy with the PostgreSQL tools: export either the full database cluster with pg_dumpall or only the schema with pg_dump. Restoring into a database where neither the schema nor the BPS roles and users exist yet is straightforward. With a schema-only export, however, some extra steps are required: create the database, the tablespaces, and the roles schema_usr and schema_gst before the import, and recreate the database users after the import in the users app of the BPS GUI.
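The two native backup variants can be sketched as command lines. This is only an illustration, assembled in Python so the pieces are easy to see: the -Fc (custom format) and -n (schema filter) switches are standard pg_dump options, while the host, user, and file names are examples matching those used later in this page.

```python
# Sketch only: the two native backup variants described above.
# Host/user/schema/file names are examples, not fixed values.

def full_cluster_dump(host, user, outfile):
    """pg_dumpall exports every database, role and tablespace as plain SQL."""
    return ["pg_dumpall", "-h", host, "-U", user, "-f", outfile]

def schema_only_dump(host, user, db, schema, outfile):
    """pg_dump -n exports one schema; -Fc writes pg_restore's custom format."""
    return ["pg_dump", "-h", host, "-U", user, "-n", schema,
            "-Fc", "-f", outfile, db]

print(" ".join(schema_only_dump("vmpg11", "postgres", "bpsdb",
                                "lu_agrar", "lu_agrar.dmp")))
# pg_dump -h vmpg11 -U postgres -n lu_agrar -Fc -f lu_agrar.dmp bpsdb
```

The schema-only variant is the one that needs the extra preparation steps mentioned above, because roles and tablespaces are not part of a pg_dump archive.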

There are special cases, however, that need many more steps, for example a reorganization because something in the schema is broken (indexes, constraints, triggers, functions…). See the BPS manual for the details to take care of.

This is where the scripts bpspgexp.cmd and bpspgimp.cmd come into play: they handle almost everything required for a clean import or reorganization.

bpspgexp.cmd exports only the vital table data and sequences. The best time to export a production schema is when nobody else is working on it. Nevertheless, the export pulls a consistent snapshot as of the time the export was started.

For an import with bpspgimp.cmd, a BPS schema must already be present on the target PostgreSQL server. Only the table data are then imported into that schema, and the sequences are recreated at their current values. This way all other objects, grants, etc. remain in a clean state and you end up with a perfect BPS datastore.
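The internals of bpspgimp.cmd are not shown here, but the concept of importing only table data into an already prepared schema corresponds to pg_restore's standard --data-only switch. A hypothetical sketch, again just assembling the command line; what bpspgimp.cmd actually runs may differ:

```python
# Concept sketch of a data-only import into an existing schema:
# table contents are restored, all other objects stay untouched.
# These are standard pg_restore switches; bpspgimp.cmd may do more
# (e.g. recreating sequences at their current values).

def data_only_restore(host, user, db, schema, dumpfile):
    return ["pg_restore", "-h", host, "-U", user, "-d", db,
            "-n", schema, "--data-only", dumpfile]

print(" ".join(data_only_restore("vmpg12", "postgres", "bpsdb",
                                 "os_zvm", "os_zvm.dmp")))
# pg_restore -h vmpg12 -U postgres -d bpsdb -n os_zvm --data-only os_zvm.dmp
```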

After importing with bpspgimp.cmd, the only step left is to recreate the BPS database users in the users app of the BPS GUI.

The scripts must run with the credentials of postgres, or another user with PostgreSQL superuser privileges.

Open a terminal window, create and change to a working directory

mkdir C:\dev\bpspgdmp
cd /d C:\dev\bpspgdmp 

Check that psql, pg_dump and bpspgexp are found in the path

psql -V
pg_dump -V

Note that the version of pg_dump must be the same as, or newer than, that of the database server.
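The version check can be automated by parsing the -V banner. A hypothetical helper (not part of the scripts) that enforces the "same or newer" rule on the major.minor numbers:

```python
import re

# Hypothetical helper: compare the "pg_dump -V" banner against the
# server version to enforce the "same or newer" rule.

def parse_pg_version(banner):
    """Extract (major, minor) from e.g. 'pg_dump (PostgreSQL) 11.5'."""
    m = re.search(r"(\d+)\.(\d+)", banner)
    if not m:
        raise ValueError("no version found in: " + banner)
    return int(m.group(1)), int(m.group(2))

def dump_tool_ok(tool_banner, server_banner):
    return parse_pg_version(tool_banner) >= parse_pg_version(server_banner)

print(dump_tool_ok("pg_dump (PostgreSQL) 11.5", "PostgreSQL 11.2"))  # True
```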

Start script

bpspgexp postgres:********@vmpg11/bpsdb lu_agrar

Sample output

C:\dev\bpspgdmp>bpspgexp postgres:********@vmpg11/bpsdb lu_agrar
Exporting to lu_agrar.dmp
Script bpspgexp completed.
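The first argument packs user, password, host, and database into one string of the form user:password@host/dbname; the second argument is the schema. A hypothetical parser (not part of the scripts) shows how such a spec splits apart:

```python
# Hypothetical parser for the user:password@host/dbname connection
# argument the scripts take. Purely illustrative.

def parse_conn_spec(spec):
    creds, _, target = spec.partition("@")
    user, _, password = creds.partition(":")
    host, _, dbname = target.partition("/")
    return {"user": user, "password": password,
            "host": host, "dbname": dbname}

print(parse_conn_spec("postgres:secret@vmpg11/bpsdb"))
# {'user': 'postgres', 'password': 'secret', 'host': 'vmpg11', 'dbname': 'bpsdb'}
```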


In the working directory you can see the file lu_agrar.dmp created by pg_dump

 Volume in drive C is System
 Volume Serial Number is 7A49-8313

 Directory of C:\dev\bpspgdmp

11.10.2019  12:04    <DIR>          .
11.10.2019  12:04    <DIR>          ..
11.10.2019  11:23           954'422 lu_agrar.dmp
               1 File(s),        954'422 bytes
               2 Dir(s), 48'421'363'712 bytes free


In our example we have the file os_zvm.dmp containing the export of schema os_zvm from another DB server. We want to import the schema on our target server vmpg12.

  • Create a working directory from where you will run bpspgimp.cmd.
  • Copy the file os_zvm.dmp into the working directory.
  • Check/prepare the schema of os_zvm on server vmpg12:
    • In case it does not exist, create it with the «BPS Datastore Setup» assistant. (For details see BPS manual)
    • In case it already exists but is an old version, update it with the «BPS Datastore Update» assistant.
    • In case it already exists but may be broken, delete it with the «BPS Datastore Drop» assistant, and then re-create it with the «BPS Datastore Setup» assistant.
  • In any case, make sure the target schema has the same version as the source schema, otherwise the import may fail. (Check in Central System Settings/Install/DbLevel)
  • Open a terminal window and proceed as follows

Change to the working directory

cd /d C:\dev\bpspgdmp

Check that psql, pg_restore and bpspgimp are found

psql -V
pg_restore -V

Start script

bpspgimp postgres:******@vmpg12/postgres os_zvm

Sample script output

  • howto/pgdump.txt
  • Last modified: 2019/10/15 12:10
  • by ibk