PG_RESTORE – Backup and Restore – Parameters

These are the parameters that pg_restore takes. There is no strict required order for the parameters; however, there are some best practices and dependencies to keep in mind, which I list at the end of this post.

General Options

-d, --dbname=DBNAME
Specifies the database to restore into. If omitted, pg_restore outputs SQL to stdout.

-h, --host=HOSTNAME
Specifies the database server hostname or IP. Defaults to a local connection.

-p, --port=PORT
Specifies the database server port. Defaults to 5432.

-U, --username=USERNAME
Specifies the username for authentication.

-f, --file=FILENAME
Specifies the output file for the generated SQL script (used when writing the restore to a script rather than directly into a database with -d).

-w, --no-password
Prevents password prompt. Used when authentication does not require a password.

-W, --password
Forces a password prompt. Useful when the password is not stored in .pgpass.

Example Command

pg_restore -h prod-server-01 -p 5432 -U username -d db003target "c:/dump/2025db003_backup.dump"

Format Options

-F, --format=c|d|t|p
Specifies the archive format:

c → Custom format (from pg_dump -Fc)

d → Directory format (from pg_dump -Fd)

t → Tar format (from pg_dump -Ft)

p → Plain SQL format (not applicable to pg_restore; plain SQL dumps are restored with psql instead)

Restore Control Options

-a, --data-only
Restores only data (excludes schema). Useful when schema already exists.

-c, --clean
Drops database objects before recreating them. Prevents conflicts with existing objects.

-C, --create
Includes a CREATE DATABASE command. The database will be recreated before restoring.

-I, --index=NAME
Restores only a specific index. Useful for partial restores.

-j, --jobs=NUM
Uses multiple jobs (parallel restore). Speeds up restores of custom (-Fc) and directory (-Fd) archives.

-L, --use-list=FILE
Restores only objects listed in FILE. The list can be generated using pg_restore -l.
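
A minimal sketch of that workflow, assuming a custom-format archive called backup.dump and a target database mydb (both hypothetical names):

pg_restore -l backup.dump > restore.list

(edit restore.list and comment out unwanted objects by placing a semicolon at the start of their lines)

pg_restore -L restore.list -d mydb backup.dump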

-n, --schema=NAME
Restores only a specific schema. Useful for restoring part of a dump.

-P, --function=NAME(args)
Restores only a specific function.

-s, --schema-only
Restores only the schema (no data). Useful for setting up database structure.

-S, --superuser=USERNAME
Specifies a superuser for certain restore operations.

-t, --table=NAME
Restores only a specific table. Useful for partial restores.
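
For example, a partial restore of a single table into the database from the earlier example might look like this (mytable is an illustrative table name):

pg_restore -h prod-server-01 -p 5432 -U username -d db003target -t mytable "c:/dump/2025db003_backup.dump"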

-T, --trigger=NAME
Restores only a specific trigger.

Output Control Options

-e, --exit-on-error
Stops on the first error. Useful for debugging issues.

-v, --verbose
Provides detailed output. Helps with troubleshooting.

--no-owner
Does not restore ownership metadata. Useful when restoring as a different user.

--no-privileges, --no-acl
Does not restore GRANT/REVOKE statements.

--disable-triggers
Disables triggers on tables during restore. Useful for bulk data loading.

--if-exists
Uses IF EXISTS in the DROP statements emitted with --clean. Prevents errors if objects don’t exist.

--strict-names
Requires each table/schema pattern given with -t or -n to match at least one object in the archive. Prevents silently restoring nothing when a name is mistyped.

Help & Version

-?
Displays help information.

--version
Shows the version of pg_restore.

General Rules of IMPLEMENTATION

There is no strict required order for the parameters that pg_dump and pg_restore take; however, there are some best practices and dependencies to keep in mind.

Options Can Be Placed in Any Order

pg_dump -U postgres -h localhost -d mydb -F c -f backup.dump

pg_dump -F c -f backup.dump -U postgres -d mydb -h localhost

Both commands are valid.

Options That Take Arguments Must Be Followed by Their Values

✅ pg_dump -U postgres -h localhost -d mydb -F c -f backup.dump

❌ pg_dump -U -h localhost postgres -d mydb -F c -f backup.dump (Invalid because -U needs a value immediately after)

Multiple Parameters Can Be Combined (if they don’t need arguments)

✅ pg_dump -U postgres -v -F c -f backup.dump

✅ pg_dump -U postgres -F c -v -f backup.dump

❌ pg_dump -F -U postgres c -f backup.dump (Invalid because -F needs an argument immediately after)

For Readability, It’s Best to Keep Related Options Together

Authentication options (-U, -h, -p, -W) should be placed together.

Dump/restore format options (-F, -j) should be grouped logically.

Filtering options (-n, -t, -T) should be near each other.

Specific Cases to Watch Out For

1. pg_restore with -d (Database Restore)

If using pg_restore -d mydb without -C, the archive contents are restored directly into the existing database mydb.

If using -C, point -d at an existing maintenance database (usually postgres); pg_restore then creates the database named in the archive and restores into it: pg_restore -C -d postgres backup.dump

2. pg_dump Parallel Jobs (-j)

-j (parallel jobs) only works with directory format (-F d).

Example: pg_dump -F d -j 4 -f my_backup_dir -U postgres -d mydb

If -j is used with a non-directory format, pg_dump will fail.

3. pg_restore and -j (Parallel Restore)

-j (parallel restore) works with the custom (-Fc) and directory (-Fd) archive formats.

Example: pg_restore -j 4 -d mydb my_backup_dir

Using -j with a plain SQL dump or a tar archive does not work; only custom and directory archives support parallel restore.

Best Practices

✅ Authentication options first (-U, -h, -p)
✅ Output control next (-F, -f)
✅ Content filtering later (-s, -a, -t, -n)
✅ Parallelization at the end (-j)
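
Putting that ordering together, a typical restore command might look like the following (server, database and archive names are illustrative):

pg_restore -U postgres -h localhost -p 5432 -d mydb --no-owner -j 4 my_backup_dir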

PG_DUMP – Backup and Restore – Parameters

These are the parameters that pg_dump takes. There is no strict required order for the parameters; however, there are some best practices and dependencies to keep in mind, which are listed at the end of this post.

General Options

-d, --dbname=DBNAME
Specifies the database to dump. If omitted, pg_dump falls back to the PGDATABASE environment variable and then to the connection username.

-h, --host=HOSTNAME
Specifies the database server hostname or IP. Defaults to a local connection.

-p, --port=PORT
Specifies the database server port. Defaults to 5432.

-U, --username=USERNAME
Specifies the username for authentication.

-f, --file=FILENAME
Specifies the output file (or directory, for the directory format) that the dump is written to. If omitted, pg_dump writes to standard output.

-w, --no-password
Prevents password prompt. Used when authentication does not require a password.

-W, --password
Forces a password prompt. Useful when the password is not stored in .pgpass.

Example below

pg_dump -h prod-server-01 -p 5432 -U username -d db003 -F c -f "c:/dump/2025db003_backup.dump"

Format Options

-F, --format=c|d|t|p
Specifies the archive format:

c → Custom format (from pg_dump -Fc)

d → Directory format (from pg_dump -Fd)

t → Tar format (from pg_dump -Ft)

p → Plain SQL script (the default pg_dump output; restored with psql rather than pg_restore)

Dump Control Options

-a, --data-only
Dumps only the data, not the schema. Useful when the schema already exists at the destination.

-c, --clean
Outputs commands to drop database objects before recreating them. Prevents conflicts with existing objects when the dump is restored.

-C, --create
Includes a CREATE DATABASE command so the database is created when the dump is restored.

-I, --index=NAME
A pg_restore option only (restores a specific index); pg_dump has no --index switch.

-j, --jobs=NUM
Dumps in parallel using multiple jobs. Only supported with the directory format (-Fd).

-L, --use-list=FILE
A pg_restore option only (restores the objects listed in FILE, generated with pg_restore -l); not available in pg_dump.

-n, --schema=NAME
Dumps only the named schema. Useful for partial dumps.

-P, --function=NAME(args)
A pg_restore option only; pg_dump has no --function switch.

-s, --schema-only
Dumps only the schema (no data). Useful for copying database structure.

-S, --superuser=USERNAME
Specifies the superuser name to use when disabling triggers (relevant with --disable-triggers).

-t, --table=NAME
Dumps only the named table. Useful for partial dumps.

-T, --exclude-table=NAME
Excludes the named table from the dump (in pg_dump, -T excludes tables; the --trigger option belongs to pg_restore).

Output Control Options

-e, --exit-on-error
A pg_restore option only (stop on the first error); in pg_dump 14 and later, -e instead stands for --extension=PATTERN and dumps only the named extension(s).

-v, --verbose
Provides detailed output. Helps with troubleshooting.

--no-owner
Does not output commands to set object ownership. Useful when restoring as a different user.

--no-privileges, --no-acl
Does not dump GRANT/REVOKE statements.

--disable-triggers
Emits commands to disable triggers while the data is reloaded. Only relevant for data-only dumps.

--if-exists
Uses IF EXISTS in the DROP statements emitted with --clean. Prevents errors if objects don’t exist.

--strict-names
Requires each table/schema pattern given with -t or -n to match at least one object. Prevents silently dumping nothing when a name is mistyped.

Help & Version

-?
Displays help information.

--version
Shows the version of pg_dump.

General Rules of IMPLEMENTATION

There is no strict required order for the parameters that pg_dump and pg_restore take; however, there are some best practices and dependencies to keep in mind.

Options Can Be Placed in Any Order

pg_dump -U postgres -h localhost -d mydb -F c -f backup.dump

pg_dump -F c -f backup.dump -U postgres -d mydb -h localhost

Both commands are valid.

Options That Take Arguments Must Be Followed by Their Values

✅ pg_dump -U postgres -h localhost -d mydb -F c -f backup.dump

❌ pg_dump -U -h localhost postgres -d mydb -F c -f backup.dump (Invalid because -U needs a value immediately after)

Multiple Parameters Can Be Combined (if they don’t need arguments)

✅ pg_dump -U postgres -v -F c -f backup.dump

✅ pg_dump -U postgres -F c -v -f backup.dump

❌ pg_dump -F -U postgres c -f backup.dump (Invalid because -F needs an argument immediately after)

For Readability, It’s Best to Keep Related Options Together

Authentication options (-U, -h, -p, -W) should be placed together.

Dump/restore format options (-F, -j) should be grouped logically.

Filtering options (-n, -t, -T) should be near each other.

Specific Cases to Watch Out For

1. pg_restore with -d (Database Restore)

If using pg_restore -d mydb without -C, the archive contents are restored directly into the existing database mydb.

If using -C, point -d at an existing maintenance database (usually postgres); pg_restore then creates the database named in the archive and restores into it: pg_restore -C -d postgres backup.dump

2. pg_dump Parallel Jobs (-j)

-j (parallel jobs) only works with directory format (-F d).

Example: pg_dump -F d -j 4 -f my_backup_dir -U postgres -d mydb

If -j is used with a non-directory format, pg_dump will fail.

3. pg_restore and -j (Parallel Restore)

-j (parallel restore) works with the custom (-Fc) and directory (-Fd) archive formats.

Example: pg_restore -j 4 -d mydb my_backup_dir

Using -j with a plain SQL dump or a tar archive does not work; only custom and directory archives support parallel restore.

Best Practices

✅ Authentication options first (-U, -h, -p)
✅ Output control next (-F, -f)
✅ Content filtering later (-s, -a, -t, -n)
✅ Parallelization at the end (-j)
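
Following that ordering, a typical dump command might look like the following (database and directory names are illustrative):

pg_dump -U postgres -h localhost -p 5432 -d mydb -F d -f my_backup_dir -n public -j 4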

Postgres – Summary Check of Size No of Tables Views and Rows

It is a good idea to do a general reconciliation before and after a database dump and restore to perform a sense check as to whether the restoration has completed successfully.

SELECT datname AS database_name, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
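
A complementary per-table breakdown of sizes (a sketch using the standard statistics view pg_statio_user_tables) can be run in the same way before and after a restore:

SELECT schemaname, relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;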

And here is a further breakdown of numbers in a database

DO $$
DECLARE
    rec         RECORD;
    rec_count   bigint;
    total_rows  bigint := 0;
    total_items int := 0;
BEGIN
    -- Create a temporary table to store individual counts
    CREATE TEMP TABLE temp_table_counts (
        table_name text,
        row_count  bigint
    ) ON COMMIT DROP;

    -- Loop through each user table and view (exclude system schemas)
    FOR rec IN
        SELECT table_schema, table_name, table_type
        FROM information_schema.tables
        WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
          AND table_type IN ('BASE TABLE', 'VIEW')
        ORDER BY table_name
    LOOP
        BEGIN
            -- Dynamically count the rows for the current table/view
            EXECUTE format('SELECT count(*) FROM %I.%I', rec.table_schema, rec.table_name)
            INTO rec_count;
        EXCEPTION WHEN OTHERS THEN
            -- In case of error (for example, if the view is not countable), set to NULL
            rec_count := NULL;
        END;

        -- Insert the result into our temporary table.
        INSERT INTO temp_table_counts(table_name, row_count)
        VALUES (rec.table_schema || '.' || rec.table_name, rec_count);

        -- Accumulate totals (if count succeeded)
        IF rec_count IS NOT NULL THEN
            total_rows := total_rows + rec_count;
        END IF;
        total_items := total_items + 1;
    END LOOP;

    -- Return the per-table results
    RAISE NOTICE 'Table Name | Row Count';
    FOR rec IN 
        SELECT table_name, row_count 
        FROM temp_table_counts 
        ORDER BY table_name
    LOOP
        RAISE NOTICE '% | %', rec.table_name, rec.row_count;
    END LOOP;

    -- Return the summary totals
    RAISE NOTICE '--------------------------------------';
    RAISE NOTICE 'Total Rows: %, Total Tables/Views: %', total_rows, total_items;
END $$;

And on running it, this is the kind of result it should return

SQL Server / Postgres – Pattern for Distinct List / Combo Box source

This primarily refers to variable text fields, but in all of the CRUD systems I’ve ever designed every single page has contained fields that use lookup dialogs. Their sheer ubiquity makes their design and operation particularly important, significantly impacting an application’s feel to users.

In principle sourcing the values for a lookup appears to be a very simple choice.

  • Hard coded through the user interface (e.g. Y/N)
  • Referenced to a separate “lookup” table

  As you design more systems you start to see further important options:

  • Referenced to distinct values sourced from a self-referencing query of the table
  • Whether to store values as foreign keys
  • Whether to enforce validation at the database level

  Some general rules I follow:

    1) If the application allows users to bulk import data into a table, consider referencing a distinct values list sourced from a self-referencing query of the table.
    2) Almost NEVER alias values / use foreign keys in tables that will receive imported data; it is often better not to use them even when there is no import facility.
    3) The one exception to rule 2 can be customer account codes. Account codes for clients can be incredibly important and often link to a table that specifies, say, a company name and address, things that may change in time. Users really need to be trained to understand the derivation of foreign keys to use such systems, or the codes will generally break import routines or break referencing and require correction. Account codes tend to be a weird case of a derived surrogate where the value is obviously derived from the lengthened value, e.g. AMZN for Amazon or MSFT for Microsoft in NASDAQ listings. This works for NASDAQ because of its ubiquity and because NASDAQ has educated its users; be prepared that you will need to educate your users if you use this pattern. For this reason my general rule is to only use foreign key aliases for account codes.
    4) If you have a field or table that will receive values which may be the subject of a lookup, don’t be too harsh with the field validation in the database data definition and be more permissive with the field variable type and length than you might expect.

    And here is suggested SQL that can be used to identify distinct values in a field.

    SELECT DISTINCT account_code FROM table_name;
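
    In practice, for a combo box source you will usually want to exclude NULLs and return the values in a predictable order; a slightly fuller sketch using the same illustrative names:

    SELECT DISTINCT account_code
    FROM table_name
    WHERE account_code IS NOT NULL
    ORDER BY account_code;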
    

    Postgres : Adjust default Database to inherit PostGIS and pgRouting on CREATE

    In Postgres, when an administrator creates a database, in the background Postgres copies a template database.
    By default the new database is a copy of a database created at install time called template1. This database is usually hidden, and even if you are logged in as the postgres superuser it is not listed when you search the server for all available databases.

    Normally an administrator that is limited to dbcreator rights does NOT have rights to install extensions. This is particularly an issue when dealing with geography administrators that need to create databases with the PostGIS and pgRouting extensions, but you don’t want to elevate these administrators to superusers.

    Superuser administrators can however install these extensions on the template database, which means that when an administrator with dbcreator rights logs in and runs the CREATE DATABASE command, the new database will inherit the postgis and pgrouting extensions (and any other extensions you add to template1).

    Steps to Alter template1:
    Connect to the PostgreSQL Instance as a Superuser:

    Use a superuser account (e.g., postgres) to connect to the PostgreSQL instance.

    Allow template1 to be modified:

    By default the template databases are protected (template0 does not accept connections at all, and template1 is flagged as a template). If you need to treat template1 as a regular database temporarily, you can clear its template flag:

    ALTER DATABASE template1 IS_TEMPLATE = false;
    

    Connect to template1:

    Now, connect to the template1 database.

    \c template1

    Install the PostGIS and pgRouting Extensions in template1:

    Once connected to template1, install the extensions:

    CREATE EXTENSION postgis;
    CREATE EXTENSION pgrouting;
    

    Restore template1 to be a template database:

    After installing the extensions, you need to restore template1 as a template:

    ALTER DATABASE template1 IS_TEMPLATE = true;

    Prevent Modifications to template1 (Optional):

    To prevent accidental changes to template1 in the future, you can revoke connection permissions:

    REVOKE CONNECT ON DATABASE template1 FROM public;

    Future Databases Will Inherit PostGIS and pgRouting:
    Now, whenever a new database is created using the template1 template, it will automatically inherit the PostGIS and pgRouting extensions:

    
    CREATE DATABASE new_database_name;

    — This new database will have PostGIS and pgRouting installed.
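
    A quick way to confirm the inheritance (a sketch to run after connecting to the new database):

    \c new_database_name
    SELECT extname, extversion FROM pg_extension;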

    Caution:
    Modifying the default template1 means all databases created in the future will have the extensions, which might not be ideal if you don’t want PostGIS or pgRouting in every database. You could alternatively create a custom template and specify it when creating spatial databases.

    Example of creating a custom template:

    CREATE DATABASE template_postgis TEMPLATE template1;
    \c template_postgis
    CREATE EXTENSION postgis;
    CREATE EXTENSION pgrouting;
    ALTER DATABASE template_postgis IS_TEMPLATE = true;

    This way, you can create new spatial databases by specifying this custom template:

    CREATE DATABASE spatial_db TEMPLATE template_postgis;

    This approach provides more control over which databases include the extensions.

    Postgres : Create Table SQL Syntax Best Practice

    For primary keys it is now recommended to use an identity column during table creation.

    It explicitly conveys the purpose of the column and aligns with the SQL standard, so it should be used where possible rather than the older serial pseudo-type. In theory this should be more portable as well.

    Depending on your Postgres version you may or may not need the following extension to create UUIDs: gen_random_uuid() is built in from PostgreSQL 13 onwards, while on earlier versions it is provided by pgcrypto.

    CREATE EXTENSION IF NOT EXISTS pgcrypto WITH SCHEMA public;

    So here is standard code for creating a table with three columns

    CREATE TABLE t001example 
    (
    pkid bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    genuuid uuid DEFAULT gen_random_uuid(),
    sitename varchar(500)
    );

    And these are example values
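
    (The original post shows these as a screenshot.) A minimal sketch that generates comparable example rows, with made-up sitename values:

    INSERT INTO t001example (sitename) VALUES ('Site A'), ('Site B');
    SELECT * FROM t001example;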

    Geoserver – Setting up Users / Roles / Workspaces and Layers

    Create Users
    Firstly, ensure you are logged in as administrator. ONLY the system administrator can set up users and roles, and the default GeoServer administrator username/password is admin/geoserver.

    The admin login has additional menus, as shown here

    Go to the Security menu on left towards the bottom and find line marked – Users/Groups/Roles

    Go to the Roles tab

    Look to the green circle with white cross and add new role and complete as appropriate

    Note that once you have created the role you will not be able to edit its name, so if you don’t want the old role name to continue you have to delete the role and then create a role with a different name.

    Next we create the users

    Go to the Users/Groups tab

    Look to the green circle with white cross to add a new user, and within the Roles taken section ensure you move the role you created from the left box to the right (see grey above).

    The new username should now appear in the User list

    You grant admin privileges to the workspaces through the Workspace section as follows

    Within the edit workspace section go to security

    And give admin rights to the role you wish to link to the workspace

    Note that you still won’t see anything until you go into layer edit and edit the security as follows.

    Within the Security section you can also see the roles added

    Add POSTGIS extension to Azure Postgres Database – Worked Example

    Firstly we need to allow for the extension on the server.
    Within the Azure portal navigate to server parameters

    Search for azure.extensions

    Select which extensions to add
    REMEMBER to save before exit!

    Wait a short period while the extension is loaded onto the server.

    Some points
    Azure Database for PostgreSQL supports a subset of key PostgreSQL extensions as listed HERE. This information is also available by running SHOW azure.extensions;. Extensions not listed in this document aren’t supported on Azure Database for PostgreSQL – Flexible Server. You can’t create or load your own extension in Azure Database for PostgreSQL.

    After extensions are allow-listed and loaded, these must be installed in your database before you can use them. To install a particular extension, you should run the CREATE EXTENSION command. This command loads the packaged objects into your database.
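
    For example, after allow-listing PostGIS on the server you would run something like the following in the target database (a sketch):

    CREATE EXTENSION IF NOT EXISTS postgis;
    SELECT extname, extversion FROM pg_extension;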

    The extension(s) should now be listed against your database.

    Postgres on Azure : also named Azure Database for Postgres Flexible Server : MS Azure

    Quick Start Azure Documentation

    Good link on tips and tricks Azure

    Login to your azure portal and choose Create a resource

    Next select the Azure Database for PostgresSQL Flexible Server

    Now indicate the starting parameters
    Please note: IF you want to connect using pgAdmin, I found issues with version 15 so chose version 14. It is not clear why this is; some quick googling suggested it was the version of pgAdmin I was using, so at the bottom of this post I capture the About information for the pgAdmin 4 client I am using. I was able to connect to the server via psql.
    Remember all of your parameters and choose an appropriate account

    Note you can stop the server if this is a development test copy which is good for reducing costs…

    Do this by navigating to the server and selecting the stop button at the top.

    Note: give it time to start and stop; don’t, like me, try to start the server before it has finished stopping!

    Postgres – Export a Spatial database and then Importing again (pg_dump + psql utilities)

    Let’s take this from the beginning. The first thing you need to do is be on the physical machine where the database is running.

    Next check that the database server is up and running

    To do that

    Go to the windows bar click on the search icon and type in

    Services

    Scroll down to P and look for the name postgresql, then look to the Status column. In the above example there are two instances, postgres 11 and postgres 14; of the two only 14 shows as Running.

    There are several Utility executables that come with every postgres installation.

    The standard installation set up puts these into a directory equivalent to the following;

    We need to navigate at the command prompt to the location of the executable we are interested in. Technically you could qualify the path to the executable when running a command, but I tend to navigate to the location.

    So firstly in the search button go to the command prompt

    cmd

    cd C:\Program Files\PostgreSQL\14\bin

    pg_dump -U postgres -W -F p -d dbspatialhub > c:\sql\dbspatialhub.sql

    and here is a string that works if your database is on the local server with a non standard port / substitute 5433 for your port

    pg_dump -U postgres -h localhost -p 5433 -d dbsource > c:\sql\dbsourceexport.sql

    Run this command – you will note that I qualify the output directory here. I prefer to do this as I don’t particularly like writing and deleting files in the program directory. I prefer that directory to be reserved for standard files, reducing the likelihood of me in any way accidentally deleting an important file.

    You should be presented with the Password: prompt; input the password for the specified user (-U) to complete.

    Success will be shown as a return to the former prompt with no message.

    and running a DIR on the directory look for the sql file…

    Next, from either the psql prompt or from within pgAdmin 4, create a database you are going to import the SQL into and ensure that the postgis extension is installed. Remember this is the psql prompt and not the DOS command line prompt.

    CREATE DATABASE dbspatialhubnew;
    \c dbspatialhubnew
    CREATE EXTENSION postgis;
    

    Now you have to go back to the command prompt (and the correct directory).

    If you are on a different port add the following string, substituting your own port number: -p 5434

    psql -U postgres -d dbspatialhubnew -f c:\sql\dbspatialhub.sql

    After being asked for the password the import will start and continue showing a list of various commands and data being executed and imported.
    Similar to below.

    Now you can go back to a client such as QGIS , link to the database to start confirming a successful import.

    Congratulations you have copied a database.

    It should be noted that as it stands I am having problems importing the layers table, which has the format of the boundaries for tables.
    I will continue to work on this.

    Postgres – create a trigger to automatically calculate area of Polygon on edit

    This will calculate the area in hectares of a polygon on creation or edit (referenced here as BEFORE INSERT OR UPDATE ON).

    This appears to work for polygon and multi-polygon geometry types. I have had a discussion with a colleague and he has indicated that the type of projection may affect the accuracy of the measurement. Certainly there is a straight division in there, so it would probably be possible to alter the division figure by empirically testing to get it as accurate as possible. Apparently with some popular projections the further north you are the less accurate the resultant figure. (Just something to bear in mind.)

    CREATE OR REPLACE FUNCTION calc_area() 
    RETURNS trigger AS $BODY$ 
    BEGIN NEW.grossarea := ROUND((st_area(NEW.geom)/10000)::numeric,2); 
    RETURN NEW; 
    END; 
    $BODY$ LANGUAGE plpgsql; 

    What I found interesting about this was that the field name is defined in the function and called from the trigger. There is possibly a better way of writing this to define the field in the trigger itself.

    And the trigger that goes on the table

    CREATE TRIGGER area_calculate BEFORE INSERT OR UPDATE ON public.t001landparcels FOR EACH ROW EXECUTE PROCEDURE calc_area();
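
    As a quick check that the trigger fires, here is a sketch that assumes t001landparcels has at least a numeric grossarea column and a geom column in a metre-based projection such as EPSG:27700:

    INSERT INTO public.t001landparcels (geom)
    VALUES (ST_GeomFromText('POLYGON((0 0,0 100,100 100,100 0,0 0))', 27700));

    SELECT grossarea FROM public.t001landparcels; -- a 100m x 100m square returns 1.00 hectare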

    Postgres Command Line : psql : Create a switch to determine whether an Arc has a clockwise or anti-clockwise direction

    This is a post focused on network distance measurement using PostGIS and routing plugins. It discusses the use of Ordnance Survey road network information and its preparation for use in routing.

    The Ordnance Survey open source road network layer includes roundabouts that have an attribute value of roundabout. Great, but looking at them closely some of the constituent arcs are digitised in a clockwise direction while others are digitised in an anti-clockwise direction. When using dijkstra routing with weighting to prevent incorrect pathing, it is necessary to ensure that networks are weighted in the correct fashion. Directional weighting only works if you know the direction and you weight appropriately; for directional routing around UK roundabouts, the weighting should prevent travel in an anti-clockwise direction. ST_Reverse will correct an incorrect direction, BUT the Ordnance Survey layer seems to have no attribute that consistently indicates whether an arc on a roundabout has or has not been digitised correctly. Marking lines with direction and viewing them, we see the random nature of many arcs on roundabouts.

    Here is Sheriff Hall Roundabout on Edinburgh City Bypass

    Here is Straiton roundabout just north of the bypass

    and finally Hermiston Gate roundabout again following the theme on Edinburgh city bypass

    It got me thinking: is there a way to determine whether arcs on roundabouts run clockwise or anti-clockwise?

    Having thought about it quite a bit, I determined that it probably would be possible if we knew three points and could create some kind of virtual clock face, with the start point at 6 and the finish at 12, passing through 9 if travelling clockwise and through 3 if travelling anti-clockwise.

    I had a look around and the following post seemed to tally with the idea of three points and positive and negative relating to clockwise or anticlockwise.

    Maths to determine direction of an arc

    Having looked at this I set about working through the problem in Excel to see if I could get consistent results.

    Firstly I created a set of directions North West South and East and placed some coordinates that could be used in calculations.

    I then went forward and tested whether I could identify the direction of various arcs from these coordinates in excel using the formula identified on Stack Exchange.

    Here I replaced a,b,c with b Beginning, m Middle and f Finish

    And I decided to work through manually in excel to ensure that I had the maths correct.

    and further testing figures

    So firstly I create a separate table that just shows the roundabouts

    CREATE TABLE t001roundaboutsntroadlink as select id pkidt001, st_linemerge(geom) as geom from ntroadlink where formofway = 'Roundabout';

    In the above I use st_linemerge to ensure that all geometry is linestring as this is necessary to use the st_startpoint and st_endpoint postgis functions.

    Next I added the required variables from the Stack Exchange algebra to the line table

    ALTER TABLE t001roundaboutsntroadlink add column bx float(8),
    Add column by float(8),
    Add column mx float(8),
    Add column my float(8),
    Add column fx float(8),
    Add column fy float(8),
    Add column ux float(8),
    Add column uy float(8),
    Add column vx float(8),
    Add column vy float(8),
    Add column uxtimesvy float(8),
    Add column uytimesvx float(8),
    Add column uxv float(8);

    Next I needed to identify a b beginning, m middle and f finish point for each line that I wanted to test.

    b points (beginning)

    CREATE TABLE t002bpoints AS SELECT pkidt001 as pkidt001,st_startpoint(geom) as geom, st_x(st_startpoint(geom)) as bx, st_y(st_startpoint(geom)) as by from t001roundaboutsntroadlink;

    m points (middle)

    CREATE TABLE t002mpoints AS SELECT pkidt001 as pkidt001,st_lineinterpolatepoint(geom,0.5) as geom, st_x(st_lineinterpolatepoint(geom,0.5)) as mx, st_y(st_lineinterpolatepoint(geom,0.5)) as my from t001roundaboutsntroadlink;

    f points (finish)

    CREATE TABLE t002fpoints AS SELECT pkidt001 as pkidt001,st_endpoint(geom) as geom, st_x(st_endpoint(geom)) as fx, st_y(st_endpoint(geom)) as fy from t001roundaboutsntroadlink;

    It was then a case of simple update queries to complete the table

    update t001roundaboutsntroadlink set bx = st_x(st_startpoint(geom));
    update t001roundaboutsntroadlink set by = st_y(st_startpoint(geom));
    update t001roundaboutsntroadlink set mx = st_x(st_lineinterpolatepoint(geom,0.5));
    update t001roundaboutsntroadlink set my = st_y(st_lineinterpolatepoint(geom,0.5));
    update t001roundaboutsntroadlink set fx = st_x(st_endpoint(geom));
    update t001roundaboutsntroadlink set fy = st_y(st_endpoint(geom));
    update t001roundaboutsntroadlink set ux=mx-bx;
    update t001roundaboutsntroadlink set uy=my-by;
    update t001roundaboutsntroadlink set vx=fx-mx;
    update t001roundaboutsntroadlink set vy=fy-my;
    update t001roundaboutsntroadlink set uxtimesvy = ux*vy;
    update t001roundaboutsntroadlink set uytimesvx= uy*vx;
    update t001roundaboutsntroadlink set uxv = uxtimesvy-uytimesvx;
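
    The sign of uxv provides the switch: with this 2D cross product a positive value indicates an anti-clockwise arc, a negative value a clockwise one, and zero a straight (collinear) line, which is consistent with the correction query further down. A quick way to eyeball the results (a sketch):

    SELECT pkidt001,
           CASE WHEN uxv > 0 THEN 'anti-clockwise'
                WHEN uxv < 0 THEN 'clockwise'
                ELSE 'straight / collinear' END AS direction
    FROM t001roundaboutsntroadlink;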

    Labelling up the roundabouts Hermiston Gate now looks like

    And Sheriff Hall Roundabout now looks like this

    Compared with a correctly directed roundabout

    CREATE TABLE t001roundaboutsntroadlinkcorrected AS TABLE t001roundaboutsntroadlink;

    And now correct the items, display as previously, and see what we see.

    UPDATE t001roundaboutsntroadlinkcorrected set geom = st_reverse(geom) where uxv > 0;

    Sheriff hall roundabout now

    and some proof that a reasonable number of lines were updated.

    Which is an indication that all roundabout arcs have been corrected properly

    But a zero uxv value indicates a straight line.

    It should however be possible to match starts with finishes at overlying points and, where a line has a uxv value of 0 and its starts and finishes are not matched with adjacent opposites, create a switch to reverse the direction of any lines that are incorrect compared to their neighbours, thus only correcting incorrect directions. I haven’t done that in this case.

    023 Postgres – Ranking and the Timestamp variable

    An investigation of ranking, the timestamp variable, the time variable and the interval variable.

    Hours minutes and seconds
    Hours minutes and tenths of seconds
    Hours minutes and hundredths of seconds
    Hours minutes and thousandths of seconds

    So to highlight the examples I will first create a database called timeexampledb.

    CREATE database timeexampledb;

    Now lets connect to that database

    \c timeexampledb

    Now I create a table called timebucket that will hold examples of the different time formats.

    create table timebucket 
    (pkid serial primary key, 
    time1secondonly timestamp(0), 
    time2tenthsecond timestamp(1), 
    time3hundredthsecond timestamp(2), 
    time4timethousandthsecond timestamp(3));
    

    Next input some examples and see what we get.

    insert into timebucket values (1, now(),now(),now(),now());
    insert into timebucket values (2, now(),now(),now(),now());
    insert into timebucket values (3, now(),now(),now(),now());
    insert into timebucket values (4, now(),now(),now(),now());
    insert into timebucket values (5, now(),now(),now(),now());
    insert into timebucket values (6, now(),now(),now(),now());
    insert into timebucket values (7, now(),now(),now(),now());
    insert into timebucket values (8, now(),now(),now(),now());
    insert into timebucket values (9, now(),now(),now(),now());
    insert into timebucket values (10, now(),now(),now(),now());
    insert into timebucket values (11, now(),now(),now(),now());
    insert into timebucket values (12, now(),now(),now(),now());
    insert into timebucket values (14, now(),now(),now(),now());
    

    and lets see what that looks like

    Here you can see, from the tenth-of-a-second column onwards, that where a value lands exactly on a second the trailing digit disappears.

    Now we can do ranking on these to determine position.

    Select pkid, 
    time1secondonly, 
    rank() over wn as rank from timebucket
    window wn as (order by time1secondonly)
    order by time1secondonly;

    This results in

    So let’s change this to rank the next column along.

    Select pkid, 
    time2tenthsecond, 
    rank() over wn as rank from timebucket 
    window wn as (order by time2tenthsecond) 
    order by time2tenthsecond;

    Appears to be working, but let’s try the other columns.

    Select pkid, 
    time3hundredthsecond, 
    rank() over wn as rank from timebucket 
    window wn as (order by time3hundredthsecond) 
    order by time3hundredthsecond;

    Appears correct, but for good measure let’s try thousandths of a second.

    Select pkid, 
    time4timethousandthsecond, 
    rank() over wn as rank from timebucket 
    window wn as (order by time4timethousandthsecond) 
    order by time4timethousandthsecond;

    And now let’s add an interval column.

    Alter table timebucket add column timeinterval time(0);
    

    But let’s add a further time5 column and update it to the current time so we can create some intervals.

    Alter table timebucket add column time5 timestamp(0);
    Update timebucket set time5 = now();
    

    Now if we want to get the time between items we can write the following SQL.

    Select pkid, 
    time5, 
    time1secondonly,
    time5-time1secondonly as tinterval 
    from timebucket;
    

    And we get

    Let’s try with a different time column.

    Select pkid, 
    time5, 
    time4timethousandthsecond,
    time5- time4timethousandthsecond as tinterval 
    from timebucket;

    So next I reduce the pkid 14 record by a day and re-run to see what happens.

    Update timebucket set time4timethousandthsecond='2019-12-04' where pkid=14;
    

    and run the former select again;

    Select pkid, 
    time5, 
    time4timethousandthsecond,
    time5- time4timethousandthsecond as tinterval 
    from timebucket;

    and we see the interval is correctly recording.

    Now, to rank on tinterval: I was unable to do it directly from a query, so I went ahead and updated the former timeinterval column as follows.

    update timebucket set timeinterval=time5-time4timethousandthsecond;

    and now doing a select on this we get

    select pkid, timeinterval from timebucket;

    What we see is

    But we are not showing the fact that 14 should be 1 day. This is because we should have defined timeinterval as an interval variable rather than a time(0) variable.

    So we can do this as follows and update appropriately.

    Alter table timebucket add column timeinterval2 interval;
    update timebucket set timeinterval2=time5-time4timethousandthsecond;
    select pkid, timeinterval2 from timebucket;

    And we get the right result

    And now let’s rank these to check it is sorting them correctly.

    Select pkid, 
    time4timethousandthsecond, 
    timeinterval2, 
    rank() over wn as rank from timebucket 
    window wn as (order by timeinterval2) 
    order by rank;

    And we get the correct result

    022 Postgres – Setting up starting variables in psqlrc

    So how do we adjust the defaults for the command line prompt in psql Postgres?

    Set up your psqlrc defaults

    Go to the command prompt and navigate to the following directory

    c:\Users\Mark\AppData\Roaming\Postgresql\

    and either find or create a file called

    psqlrc.conf

    This is a simple text file that you can edit or create in notepad.

    --------------------------
    --psqlrc set preferences--
    -- Author Mark Brooks --
    --------------------------
    
    \set QUIET 1
    
    \x auto
    \timing
    \set COMP_KEYWORD_CASE upper
    
    \pset border 2
    \pset pager off
    \pset null <NULL>
    \setenv editor 'C:\\Program Files (x86)\\Notepad++\\notepad++.exe'
    \set VERBOSITY verbose
    
    \set QUIET 0
    
    \echo 'Welcome to PostgreSQL \n'
    \echo 'Type :version to see PostgreSQL version \n'
    \echo 'Type :extensions to see the available extensions'
    
    \set version 'SELECT version();'
    \set extensions 'select * from pg_available_extensions;'

    This allows you, for instance, to set up which editor will appear when you use the \e command.

    021 Postgres with PostGIS plugin – Create junction table sites in catchments

    This is a working example of how it would be possible to set up code that would automatically allocate, for example, housing sites to school catchment areas. It could also be used to list constraints relevant to particular housing sites. It is more complicated in that it will indicate what percentage of the housing site is within which catchment area, and therefore allows for a single housing site being in multiple catchment areas. I am planning on coming back and expanding on this post. In this respect it represents a refinement of this Post.

    So we need two tables

    t001asites which has a geometry field called geom
    and another table which will be the catchments table called
    t002bcatchments which has a geometry field called geom.

    Both tables must have a serial primary key of pkid, both tables must be polygon data, and the geom field MUST be defined as polygon and NOT multipolygon.

    Air code is as follows.

      1. Create table containing digitised polygons of housing sites.
      2. Create table containing digitised polygons of catchments.
      3. Measure the area of the housing sites and place that value in an area column within the housing sites table t001asites.
      4. Split the housing sites by the catchment boundaries, ensuring that each split polygon inherits the catchment it was split by.
      5. Re-measure the areas of these split sites and add an area column to store the new calculations.
      6. Divide the figure obtained in 5 by the figure obtained in 3, which will indicate what proportion of the housing site is in which catchment.
      7. Perform a least remainder method on the individual sites grouped by their original housing sites to ensure the proportions sum to 1.

    So to the code

    BEGIN;
    SET LOCAL check_function_bodies TO FALSE;
    CREATE OR REPLACE FUNCTION part01catchjunctionmaker() returns void as $$
    Alter table t001asites add column area integer;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part02catchjunctionmaker() returns void as $$
    Update t001asites set area=ST_Area(geom);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part022catchjunctionmaker() RETURNS void AS $$
    DROP TABLE IF EXISTS t200;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part03catchjunctionmaker() RETURNS void AS $$
    CREATE TABLE t200 AS select a.pkid as t001pkid, b.pkid as t002pkid, a.area as t001area, ST_intersection(a.geom, b.geom) as geom FROM t001asites a, t002bcatchments b;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part04catchjunctionmaker() RETURNS void AS $$
    ALTER TABLE t200 add column pkid serial primary key, add column area integer,add column proportion decimal (10,9);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part06catchjunctionmaker() RETURNS void AS $$
    UPDATE t200 SET area=ST_Area(geom);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part07catchjunctionmaker() RETURNS void AS $$
    DELETE from t200 where area=0 or area is null;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part08catchjunctionmaker() RETURNS void AS $$
    UPDATE t200 SET proportion= cast(area as decimal)/cast(t001area as decimal) WHERE area > 0;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part088catchjunctionmaker() RETURNS void AS $$
    DROP table IF EXISTS t201;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part09catchjunctionmaker() RETURNS void AS $$
    Create table t201 as Select pkid,t001pkid,t002pkid, t001area, area, proportion, sum(proportion) OVER (PARTITION BY t001pkid ORDER BY t001pkid, proportion) as cum_proportion FROM t200 ORDER BY t001pkid, proportion;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part10catchjunctionmaker() RETURNS void AS $$
    Alter table t201 add column value decimal (14,9),
    Add column valuerounded integer,
    Add column cumulvaluerounded integer,
    Add column prevbaseline integer,
    Add column roundproportion integer;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part11catchjunctionmaker() RETURNS void AS $$
    UPDATE t201 set value = proportion * 100;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part12catchjunctionmaker() RETURNS void AS $$
    UPDATE t201 set valuerounded = round(value,0);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part13catchjunctionmaker() RETURNS void AS $$
    update t201 set cumulvaluerounded = round((cum_proportion*100),0);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part14catchjunctionmaker() RETURNS void AS $$
    update t201 set cumulvaluerounded=100 where cumulvaluerounded = 101;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part15catchjunctionmaker() RETURNS void AS $$
    update t201 set prevbaseline = round((cum_proportion - proportion)*100);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part16catchjunctionmaker() RETURNS void AS $$
    update t201 set roundproportion = (cumulvaluerounded-prevbaseline);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part17catchjunctionmaker() RETURNS void AS $$
    DELETE from t201 where roundproportion=0 or roundproportion is null;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part18catchjunctionmaker() RETURNS void AS $$
    alter table t201 add column proppercent decimal(3,2);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part19catchjunctionmaker() RETURNS void AS $$
    update t201 set proppercent = cast(roundproportion as decimal)/100;
    $$ LANGUAGE SQL;
    COMMIT;
    

    and now a function to pull it all together;

    CREATE OR REPLACE FUNCTION createcjt()
    RETURNS TEXT AS
    $BODY$
    BEGIN
    PERFORM part01catchjunctionmaker();
    PERFORM part02catchjunctionmaker();
    PERFORM part022catchjunctionmaker();
    PERFORM part03catchjunctionmaker();
    PERFORM part04catchjunctionmaker();
    PERFORM part06catchjunctionmaker();
    PERFORM part07catchjunctionmaker();
    PERFORM part08catchjunctionmaker();
    PERFORM part088catchjunctionmaker();
    PERFORM part09catchjunctionmaker();
    PERFORM part10catchjunctionmaker();
    PERFORM part11catchjunctionmaker();
    PERFORM part12catchjunctionmaker();
    PERFORM part13catchjunctionmaker();
    PERFORM part14catchjunctionmaker();
    PERFORM part15catchjunctionmaker();
    PERFORM part16catchjunctionmaker();
    PERFORM part17catchjunctionmaker();
    PERFORM part18catchjunctionmaker();
    PERFORM part19catchjunctionmaker();
    RETURN 'process end';
    END;
    $BODY$
    LANGUAGE plpgsql;
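
    The whole routine can then be run with a single call, assuming t001asites and t002bcatchments are already populated:

    SELECT createcjt();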
    

    019 Postgres and Postgis: Load multiple shape files into a Postgis database

    Environment
    Windows 10
    Postgresql 11

    To start you need a set of shape files that you know the location of.

    For demonstration purposes I use the OS Open Road open data shape files.

    Link to OS Open Data

    Next within explorer I create a directory in the c:\ root called data.
    And then I create two subdirectories.
    1 – shp
    2 – sql

    Place all the shape files you wish to be loaded into a postgis database into the newly created c:\data\shp directory.

    Next we will make sql text files that can be used to transfer the information from shape format into Postgis. At this point it is not necessary to even know what database you are going to place the shape files into.

    Next open up the command prompt and navigate to the bin directory of your particular postgres installation. Standard as follows :-

    Next I copy in the below text.

    For %f in (C:\data\shp\*.shp) do shp2pgsql -s 27700 %f public.%~nf > C:\data\sql\%~nf.sql

    This will take all the shape files in the C:\data\shp\ directory and create sql text files, placing them in C:\data\sql\. After completion you may wish to go to the directory and check that creation has occurred. You can even look at the text within a single file, preferably with some kind of code editor, to check that it looks approximately correct.

    Next within psql I create the target database and apply the postgis extension.
    create database osopenroaddb;
    \c osopenroaddb
    create extension postgis;

    Next, unless you have a lot of time on your hands, you will want to go to
    C:\Users\Mark\AppData\Roaming\postgresql
    and open up the pgpass.conf file and place the following line in it.

    localhost:5432:*:postgres:yoursecretpassword

    Without that line you will be asked for a password every time a new sql file is run.

    Next back in command prompt bin directory do the following. Note no password it takes it from the pgpass.conf file.

    For %f in (c:\data\sql\*.sql) do psql -h localhost -d osopenroaddb -U postgres -f %f > nul

    If you have a lot of big files this could take a while to run. But once finished all your shape files should now be in your postgis instance in the specified database.

    018 Postgres : Export Data and Structure of a Single database from a Postgres instance / Import Data and Structure of a Single database into a Postgres Instance

    Demonstration environment and programs
    Windows 10
    Postgres Version : 11.2
    QGIS desktop version : 3.4.4

    This is my working through of a process to export a single database (structure and data) from a Postgres instance (the database has the PostGIS and pgRouting extensions enabled), followed by importing it into, in this example, the same instance, although in principle it could be a different instance.
    Access the command prompt (RUN AS ADMINISTRATOR)

    PLEASE NOTE: run the command prompt as administrator or you will frequently get an ACCESS DENIED message after using the pg_dump command.

    Navigate to the directory of the Postgres version from which you wish to export the database. This will typically be the bin subdirectory of your postgres version (here 11). You can ensure that pg_dump.exe is there by doing a dir on the directory; alternatively you could reference the full path to pg_dump and then pass the parameters to it.

    eg

    Next pass as parameters the database you wish to export along with the name that you want to give the exported file, and then hit return.

    pg_dump -U postgres -p 5432 edinburghrouting > c:\dbexport.pgsql

    On hitting return, depending on the security of your instance, you will be prompted for a password.

    Enter the password hit return

    When I do this on my home computer there is no return message but going into the C drive I can see that dbexport.pgsql now exists.

    Next we want to create a blank database; this is required to import the data and structure into.
    This we do in psql signed in as a user with sufficient privilege.
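
    For example, to match the import command below, something like this at the psql prompt (the database name is the one used in the next step):

    CREATE DATABASE importededinburghrouting;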

    Now back in the command line running as administrator we can run the following.

    psql -U postgres importededinburghrouting < c:\dbexport.pgsql

    On pressing return, depending on your security, you should be asked for your password.

    Once this is done it goes through a process of recreating the structure of the database then importing all the data

    For me the first lines look like this

    and the last look like this

    Now looking at the instance as a whole we can see the imported database

    and here I am displaying geographical information through QGIS to get an idea of the data and ensure that it appears to be all correct.

      SUMMARY


    There are quite a lot of tutorials online on how to do this but most seem to skip over some details. I've tried to be as accurate as possible but depending on your setup there may be differences. Nonetheless this is an extremely important task to perform, so it is worth practicing to get right.

    MS Access Function : Loop through tables and export to csv

    A function that will loop through an access database and export all tables to csv and xls.

    Useful for subsequent import through QGIS into Postgres.

    Public Function ExportAll()
        Dim obj As AccessObject, dbs As Object
        Dim strFolder As String
        strFolder = "c:\"
        Set dbs = Application.CurrentData
        ' Loop through all tables, skipping the hidden MSys system tables
        For Each obj In dbs.AllTables
            If Left(obj.Name, 4) <> "MSys" Then
                ' Export each table to csv and xls in the target folder
                DoCmd.TransferText acExportDelim, , obj.Name, strFolder & obj.Name & ".csv", True
                DoCmd.TransferSpreadsheet acExport, acSpreadsheetTypeExcel9, obj.Name, strFolder & obj.Name & ".xls", True
            End If
        Next obj
    End Function

    QGIS and PostGIS : Identifying direction of a vector

    If using the dijkstra function with direction turned on it is important to identify the order in which the nodes of a vector line have been digitised. This is called the direction; dijkstra can use this with a reverse_cost attribute to handicap wrong movement along lines to such an extent that the correct path can be calculated around things like roundabouts.

    Here is an example of the roundabout in Straiton in Edinburgh just north of the A720 bypass. While some of the lines have a correct anti-clockwise orientation, clearly some have been incorrectly digitised.

    First we can see this by displaying the network in QGIS but using the styling to arrow the direction.

    The function that can be used to reverse such inaccuracies, if you can’t resort to buying a corrected dataset, is ST_Reverse.
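
    A sketch of how ST_Reverse might be applied, assuming a hypothetical roads table with a geom column and some way of selecting the offending lines (the pkid values are illustrative):

    UPDATE roads SET geom = ST_Reverse(geom) WHERE pkid IN (101, 102);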

    017 Postgres command line : psql : Notices

    RAISE NOTICE can provide the same function as a Message Box in VBA, i.e. you can use it to comment on the progress of a script. RAISE NOTICE is a PL/pgSQL statement rather than plain SQL, so you can’t place it in plain SQL scripts; it needs to be in plpgsql scripts. This isn’t too much of a hassle as, the way I am working at the moment, I am calling the SQL from plpgsql anyway, so I can place my message boxes in there.

    No VBA Ok buttons.

    CREATE OR REPLACE FUNCTION noticeexample() returns void as $$
    BEGIN
    RAISE NOTICE 'ONE FINE DAY IN THE MIDDLE OF THE NIGHT';
    END;
    $$
    LANGUAGE PLPGSQL;
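
    Calling the function from psql then prints the notice (a usage example):

    SELECT noticeexample();
    -- psql displays: NOTICE:  ONE FINE DAY IN THE MIDDLE OF THE NIGHT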