Postgres Command Line : psql : Create a switch to determine whether an Arc has a clockwise or anti-clockwise direction

This post is part of a series on network distance measurement using PostGIS and routing plugins. It discusses the use of Ordnance Survey road network data and its preparation for use in routing.

The Ordnance Survey open source road network layer includes roundabouts, identified by an attribute value of Roundabout. Great, but looking at them closely, some of the constituent arcs are digitised in a clockwise direction while others are digitised in an anti-clockwise direction. When using Dijkstra routing with weighting to prevent incorrect pathing, it is necessary to ensure that networks are weighted in the correct fashion. Directional weighting only works if you know the direction and weight appropriately: for UK roundabouts, any directional weighting should prevent travel in the anti-clockwise direction. ST_Reverse will correct an incorrectly directed arc, BUT the Ordnance Survey layer seems to have no attribute that consistently indicates whether an arc on a roundabout has been digitised correctly. Marking lines with direction and viewing them, we see the random nature of many arcs on roundabouts.

Here is Sheriff Hall Roundabout on Edinburgh City Bypass

Here is Straiton roundabout just north of the bypass

and finally Hermiston Gate roundabout again following the theme on Edinburgh city bypass

It got me thinking: is there a way to determine whether the arcs on roundabouts run clockwise or anti-clockwise?

Having thought about it quite a bit, I decided it probably would be possible given three points: imagine a clock face with the start point at 6 and the finish at 12, with the mid point passing through 9 if travelling clockwise or through 3 if travelling anti-clockwise.

I had a look around, and the following post seemed to tally with the idea of three points, with positive and negative values relating to clockwise or anti-clockwise.

Maths to determine direction of an arc

Having looked at this I set about working through the problem in Excel to see if I could get consistent results.

Firstly I created a set of directions (North, West, South and East) and placed some coordinates that could be used in calculations.

I then tested whether I could identify the direction of various arcs from these coordinates in Excel, using the formula identified on Stack Exchange.

Here I replaced a, b and c with b (beginning), m (middle) and f (finish).

And I decided to work through it manually in Excel to ensure that I had the maths correct,

and further testing figures
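Before moving to SQL, the Stack Exchange maths can be sketched in Python (a hypothetical helper, not part of the final workflow): the sign of the cross product of u = m - b and v = f - m classifies the turn, which is exactly what the uxv column computed later does.

```python
def arc_direction(b, m, f):
    """Classify an arc from three points: b (beginning), m (middle), f (finish).
    The sign of the cross product of u = m - b and v = f - m gives the turn:
    positive = anti-clockwise, negative = clockwise (x east, y north)."""
    ux, uy = m[0] - b[0], m[1] - b[1]
    vx, vy = f[0] - m[0], f[1] - m[1]
    uxv = ux * vy - uy * vx
    if uxv > 0:
        return 'anticlockwise'
    if uxv < 0:
        return 'clockwise'
    return 'straight'

# Clock-face check: start at 6 o'clock, finish at 12.
arc_direction((0, -1), (-1, 0), (0, 1))  # via 9 o'clock -> 'clockwise'
arc_direction((0, -1), (1, 0), (0, 1))   # via 3 o'clock -> 'anticlockwise'
```

This mirrors the spreadsheet workings: uxv greater than zero marks the arcs that the later UPDATE reverses.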

So firstly I create a separate table that just shows the roundabouts

CREATE TABLE t001roundaboutsntroadlink AS 
SELECT id AS pkidt001, st_linemerge(geom) AS geom 
FROM ntroadlink 
WHERE formofway = 'Roundabout';

In the above I use st_linemerge to ensure that all geometries are single linestrings, as this is necessary for the st_startpoint and st_endpoint PostGIS functions.

Next I added the required variables from the Stack Exchange algebra to the line table.

ALTER TABLE t001roundaboutsntroadlink 
ADD COLUMN bx float(8),
ADD COLUMN by float(8),
ADD COLUMN mx float(8),
ADD COLUMN my float(8),
ADD COLUMN fx float(8),
ADD COLUMN fy float(8),
ADD COLUMN ux float(8),
ADD COLUMN uy float(8),
ADD COLUMN vx float(8),
ADD COLUMN vy float(8),
ADD COLUMN uxtimesvy float(8),
ADD COLUMN uytimesvx float(8),
ADD COLUMN uxv float(8);

Next I needed to identify the b (beginning), m (middle) and f (finish) points for each line that I wanted to test.

b points (beginning)

CREATE TABLE t002bpoints AS 
SELECT pkidt001, 
st_startpoint(geom) AS geom, 
st_x(st_startpoint(geom)) AS bx, 
st_y(st_startpoint(geom)) AS by 
FROM t001roundaboutsntroadlink;

m points (middle)

CREATE TABLE t002mpoints AS 
SELECT pkidt001, 
st_lineinterpolatepoint(geom, 0.5) AS geom, 
st_x(st_lineinterpolatepoint(geom, 0.5)) AS mx, 
st_y(st_lineinterpolatepoint(geom, 0.5)) AS my 
FROM t001roundaboutsntroadlink;

f points (finish)

CREATE TABLE t002fpoints AS 
SELECT pkidt001, 
st_endpoint(geom) AS geom, 
st_x(st_endpoint(geom)) AS fx, 
st_y(st_endpoint(geom)) AS fy 
FROM t001roundaboutsntroadlink;

It was then a case of simple update queries to complete the table

update t001roundaboutsntroadlink set bx = st_x(st_startpoint(geom));
update t001roundaboutsntroadlink set by = st_y(st_startpoint(geom));
update t001roundaboutsntroadlink set mx = st_x(st_lineinterpolatepoint(geom,0.5));
update t001roundaboutsntroadlink set my = st_y(st_lineinterpolatepoint(geom,0.5));
update t001roundaboutsntroadlink set fx = st_x(st_endpoint(geom));
update t001roundaboutsntroadlink set fy = st_y(st_endpoint(geom));
update t001roundaboutsntroadlink set ux=mx-bx;
update t001roundaboutsntroadlink set uy=my-by;
update t001roundaboutsntroadlink set vx=fx-mx;
update t001roundaboutsntroadlink set vy=fy-my;
update t001roundaboutsntroadlink set uxtimesvy = ux*vy;
update t001roundaboutsntroadlink set uytimesvx= uy*vx;
update t001roundaboutsntroadlink set uxv = uxtimesvy-uytimesvx;

Labelling up the roundabouts Hermiston Gate now looks like

And Sheriff Hall Roundabout now looks like this

Compared with a correctly directed roundabout

CREATE TABLE t001roundaboutsntroadlinkcorrected AS TABLE t001roundaboutsntroadlink;

And now correct the incorrectly directed arcs, display them as before, and see what we get.

UPDATE t001roundaboutsntroadlinkcorrected set geom = st_reverse(geom) where uxv > 0;

Sheriff hall roundabout now

and some proof that a reasonable number of lines were updated,

which is an indication that all roundabout arcs have been corrected properly.

But a zero uxv value indicates a straight line.

It should, however, be possible to match starts with finishes at overlying points: where a line has a uxv value of 0 and its start and finish are not matched with adjacent opposites, a switch could reverse the direction of any line that is incorrect compared to its neighbours, thereby correcting only the incorrect directions. I haven't done that in this case.
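That neighbour-matching idea can be sketched in Python (a hypothetical helper, with segments reduced to tuples of vertex labels; in PostGIS the flip itself would be ST_Reverse): walk the ring from one trusted segment and reverse any segment whose start does not meet the current end.

```python
def orient_loop(segments):
    """Orient every segment of a closed ring consistently with segments[0].
    Each segment is a tuple of vertices (start, ..., end); segments found to
    be digitised backwards are flipped, as ST_Reverse would do."""
    ordered = [segments[0]]
    remaining = list(segments[1:])
    while remaining:
        tail = ordered[-1][-1]            # end vertex of the chain so far
        for i, seg in enumerate(remaining):
            if seg[0] == tail:            # already pointing the right way
                ordered.append(seg)
                remaining.pop(i)
                break
            if seg[-1] == tail:           # digitised backwards: flip it
                ordered.append(seg[::-1])
                remaining.pop(i)
                break
        else:
            raise ValueError("segments do not form a connected chain")
    return ordered

# The middle segment runs C->B, against the flow, and gets flipped:
orient_loop([('A', 'B'), ('C', 'B'), ('C', 'A')])
# -> [('A', 'B'), ('B', 'C'), ('C', 'A')]
```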

Azure App Service – Simple Web App Service IP Restriction

It's written about extensively, but here are my links and notes on setting up IP restrictions for web apps in Microsoft Azure. You can allow or deny specific IPs; setting up a restriction immediately denies all other IPs.

For this you will need
1) An Azure account with a Web App Service running.
2) The IP you specifically wish to ALLOW.

Firstly, a useful service to identify the external IP you or your customers are using:
WhatIsMyIP.com

1) Navigate to portal.azure.com and go into your account

2) Identify the App Service you wish to place IP restrictions on.

3) Within Settings look for Networking

4) Scroll down to Access Restrictions

and select

    Configure Access Restrictions

5) Adjust as appropriate

Restrictions take effect almost immediately, so usually within 30 seconds or a browser refresh the new policy will be in place.

SQL Azure – Reset Primary Key Identity/Index to Zero after Deleting all records

I was faced with a situation where I needed to set up an index for widgets. I made some attempts to import information and deleted those records from the table several times, and was faced with a situation where users might occasionally see the identity values. As a result I wanted the identity to restart at 1 and, at least at the start, run up to the approximate number of widgets.

It's a quick Google search to find this, but for the record, here is how to reset the identity on a blank table. Note that reseeding to 0 means the next inserted row gets identity 1.

In SSMS, navigate to the database and open a new query window:

DBCC CHECKIDENT ('dbo.t001table', RESEED, 0);
GO

Chrome UPDATE – Running a web site in Application mode using a BAT file

I'm running Chrome Version 84.0.4147.135 (Official build) (64-bit), and last week I noticed that Chrome seemed to change the way you create applications on the desktop: when setting up new applications there is sometimes an ugly display of the URL. Annoying.

I found a source indicating that you can run a website in application mode by executing the following at the command line.
This results in a really nice Chrome window containing your chosen web site.

start Chrome.exe --app=https://bbc.co.uk

So I went old school to get a link on desktop!

The good old bat file

@ECHO OFF
start Chrome.exe --app=https://lindoresproduction.azurewebsites.net

Proof of Negative for database design : Put simply – Try to keep data in one easily searchable place!

It's always satisfying when you conceptualise a problem that then appears widely. Invariably when I do this it is something that others reasoned about long before; nonetheless it is still satisfying to arrive at the table, albeit late.

I have done a lot of work in the public sector, where many of the legal requirements constantly change and are not always logical. This can lead to contradictory requirements which software designers, and particularly vendors, can't or won't keep up with. Inevitably we end up with satellite procedures to catch edge cases. This causes quite a lot of friction because of something I have started thinking of as Proof of Negative.

To prove a negative you have to examine all the information there has ever been; to prove a positive you often just need to find one instance, and then you can stop looking. On average, therefore, it is harder to prove a negative than to prove a positive.
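The same asymmetry shows up in a plain linear search (an illustrative Python sketch with made-up record names):

```python
def search(records, target):
    """Linear search that also counts how many records had to be inspected."""
    checks = 0
    for record in records:
        checks += 1
        if record == target:        # a positive can stop at the first hit
            return True, checks
    return False, checks            # a negative must inspect everything

applications = ["APP-1001", "APP-1002", "APP-1003", "APP-2000"]

search(applications, "APP-1002")    # positive: found after only 2 checks
search(applications, "APP-9999")    # negative: all 4 records examined
```

Searching across several disconnected systems is like running the negative case once per system, with the reconciliation gaps on top.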

This was highlighted to me the other day when looking at a map. We were looking for a planning application and were certain it existed; we had no reason to presume it didn't, yet we were unable to find it. Eventually I went to the map and looked at every planning application in the area: none existed, so the conclusion was that it had never existed. Time consuming, but it highlighted the importance of having all the information in one format that is easily comparable. Quite often, switching between systems, there are reconciliation issues (gaps or overlaps), or the search options are wildly different, which leaves you needing to reconcile between searches: additional difficulty and additional time.

So, something to keep in mind when moving data about (try to keep it in one easily searchable place). Still, it is nice to discover, or should I say re-discover, a fundamental truth. It is sometimes referred to as the philosophic burden of proof and has been debated and thought about extensively.

Wikipedia Proof of Negative

New System Assessment : A Simple Set of Guidelines for judging material differences between Software

Here's a short guide I put together for myself to help assess the value of enterprise systems that I may be looking for or designing. It is not hard and fast, and some points may not be relevant to every application, but it should form a good basis from which to start. Generally a higher number is better than a lower number, but within each number the items are cumulative and of roughly equal value.

1) Paper based = everything from single sheets to formal books of accounts
2) Simple digital = spreadsheet and other file based storage
3) Cloud based simple digital = spreadsheet and other file based storage – this can include file based blob storage
4.1) LAN relational database – normalised but otherwise fairly locked down
4.2) LAN relational database + easy import and export of data other than spatial
4.3) LAN relational database + report writer definable by users
4.4) LAN relational database + spatially enabled
5.1) Internet available relational database – normalised but otherwise fairly locked down
5.2) Internet available relational database + easy import and export of data other than spatial
5.3) Internet available relational database + report writer definable by users
5.4) Internet available relational database + spatially enabled (note: not all applications require a spatial aspect)
5.5) Internet available relational database + separate site available for public access, view only
5.6) Internet available relational database + separate public access to edit and add information

Requirements
+ Uptime / reliability
+ Speed – after reliability, very important
+ Good master detail forms (sounds easy, but Bootstrap is not great for this)
+ Good search – the larger the system, the more important its search options. Look out for things like automated objective index creation (case sensitive search is more of a hassle than a benefit a lot of the time; odd wild cards, or searches not being wild by default, are problem areas)

Important Points
+ Portability – can you up sticks and move it to a different cloud provider?
+ Non-proprietary (can be a gotcha if it is really obscure or has weird security; often linked to portability)
+ Two factor authentication
+ Responsive design (not necessarily as important as you might think in some applications – see GitHub / OpenLayers, which arguably don't need it)
+ Dynamic saving – I really miss the fast dynamic saving environment of MS Access; the save button isn't quite the same
+ Built in CRM – I generally like them, especially if data can go straight into the correct files. Increasingly I am designing systems with inbuilt CRM. I know this might not be to every organisation's taste, but it is jarring to go between systems, normalisation between systems is usually very sub-optimal, and you frequently come across proof of negative problems when data required for a specific task is not held appropriately
+ Satellite application for customers to enter information (why not let your customers maintain their information? It can empower customers – Fintech anyone?)

= All the bells and whistles
A spatially enabled, internet available database with two factor authentication, a user definable report writer, easy import and export of spatial and attribute information, and public access for customers to edit and add information (possibly via a satellite application).

GIT Hub Repository : getting started with online code storage

One of the principles of the 3-2-1 rule of backup storage is that you should have one backup copy off site. Great, but for code this can be tricky, as it constantly changes: how do you set up a continuous pipeline that allows for changes in code? You could use something like Google Cloud, which is excellent and completely acceptable, but I wanted to use Git, mainly because it is better adapted to editing code where needed (important for removing security parameters in things like web.config files), has a finer grained level of control, and is considered a professional format for storing and versioning code. Of course, with greater flexibility comes greater complexity. So here are my beginnings with GitHub.

First things first: register with GitHub.

GIT Hub Website

I would then recommend that you download Git Bash, a local program that allows you to sync files between your development machine and your online GitHub repository. I found the following video tutorial excellent for getting up and running, in particular for downloading and installing Git Bash and linking it to your GitHub repository.

GIT Hub (Good Tutorial on starting out)

Now, here I am going to set up my local repositories in the root directory, in a directory simply called git

c:/git

For my purposes I started by creating a repository on the GitHub website.

Then open Git Bash.

You will be presented with the following window

The first tricky thing is to navigate the Bash prompt. Execute the following commands:

cd /c
cd git

Next we are going to clone a repository from our GitHub web account to our local machine.
Cloning does two main things: it creates a sub-directory (in this case within the c:\git directory) and then copies all the files down from the web repository.

The command to do this is
git clone (followed by the URL of your repository)

Execute it and something similar to the below should appear; a new sub-directory should appear on your local machine with all the files within it. Congratulations, you have just downloaded all these files. You could do this with any open source project on GitHub.

Next let's see how we can download changes from the web master copy, or alternatively upload changes from local.

You might be thinking: why would I want to upload from local when I can cut and paste directly into the web browser and commit from there? Fine, but the browser user interface won't let you transfer more than 100 files at a time, which for web applications is a killer.


OK, uploading from LOCAL.
As an example, let's make a change to the readme file and try to get it accepted onto our online account.
Go to your new directory and edit the README.md file in your editor of choice, making some kind of change.

eg

Now back in Bash perform the following.

Navigate to the directory and, as the file is already tracked, commit the change:

git commit -m "Fourth commit" README.md

This has committed the change in that particular file to the local Git repository. This is important: without a commit, Git doesn't know anything has changed. Next you want to push the local commit to the master web version.

Execute the following command
git push -u origin master

Now you can go to your web repository and you should see that README.md has changed

Now the same thing can be done with whole directories so simply use explorer to upload the item into the local directory and then push from there.

DOWNLOADING changes from the GitHub repository to local.
This time let's make an edit to README.md through the web repository, commit it there, and then locally run:

git pull origin master

And we see that local has now been updated with the web repository

But what if we need to add a complete directory or a file to local and then upload it.

Here we first need to add the directory or file to local

For example create a new file in the local directory called MyNewFile.txt

git add MyNewFile.txt

Before transferring this file to your GitHub account you need to commit it first.

git commit -m "comment message" MyNewFile.txt

you then need to push this to your Git hub account

git push -u origin master

Going back to your GIT Hub repository on your account and refreshing the screen you should see the item.

The above processes should get you started with GitHub. If things start to go wrong with errors, try the below.

Note: it would appear that if you wish to overwrite a local file or directory and then add it to your remote repository, you still have to go through the add procedure. (Initially I had assumed that because a directory was already there you might not need to add it.)

So if you were going to overwrite the output folder it would be a case of
git add -A
git commit -m "Output file overwritten"
git push origin master

REMOVING a repository and starting again.
Removing a repository from local can be useful if, like me, you make changes to the local and web versions independently, Git then has issues merging the two and prevents the merge, and you aren't sure how to fix everything manually. I found the easiest way was to make sure I was happy with the online version, delete the local copy, and then clone to local again.

Firstly navigate to the repository directory you wish to delete and then

$ rm -rf .git

Now you can delete the directory and go forward but remember you will need to Clone the repository and start from there.

MS Access Function : Automate Normalisation or De-concatenate a field (MS Outlook export example)

I was contemplating a better way of storing our old emails and, being unhappy with some of the systems in place, I started considering whether I could dump them into a database.

Thing is, when you export from Outlook, some of the standard fields, in particular To and From, are concatenated. Wouldn't it be nice to separate those fields into their own table of names and addresses and reference them back to a main table of messages?

This is what I came up with.

For demonstration purposes I will use two tables

t001parent
pkid - autonumber primary key
ccaddresses - memo or long text

and the child table

t002newchildren
pkid - autonumber primary key
ccaddress - string(150 should do it)
pkidt001 - number

and here is the blank second table

Next we create a user defined function

Public Function CreateChildTable()

    Dim db          As DAO.Database
    Dim rs          As DAO.Recordset
    Dim rsTarget    As DAO.Recordset

    Dim strSQL      As String
    Dim strField1   As String
    Dim strField2   As String
    Dim varData     As Variant
    Dim i           As Integer
    Dim intPKID     As Integer

    Set db = CurrentDb

    'Select all the ccaddresses from the parent table
    strSQL = "SELECT pkid,ccaddresses FROM t001parent"

    Set rsTarget = db.OpenRecordset("t002newchildren", dbOpenDynaset, dbAppendOnly)

    Set rs = db.OpenRecordset(strSQL, dbOpenDynaset)
    With rs
        While Not .EOF
            intPKID = !pkid
            varData = Split(!ccaddresses, ";") ' Get all semi colon delimited fields
               
            'add the first email address
            With rsTarget
                .AddNew
                !ccaddress = Trim(varData(0)) ' remove spaces before writing new fields
                !pkidt001 = intPKID
                .Update
            End With
            

            'loop through additional email addresses and add them as children to table
            For i = 1 To UBound(varData)
                With rsTarget
                    .AddNew
                    !ccaddress = Trim(varData(i)) ' remove spaces before writing new fields
                    !pkidt001 = intPKID
                    .Update
                End With
            Next
            .MoveNext
        Wend

        .Close
        rsTarget.Close

    End With

    Set rsTarget = Nothing
    Set rs = Nothing
    db.Close
    Set db = Nothing

End Function

After running this we should see the following.

We've just nicely split the parent table's ccaddresses field into constituent emails and normalised it as a child of the parent table.
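For comparison, the same de-concatenation logic sketched in Python (hypothetical row tuples standing in for the DAO recordsets):

```python
def split_ccaddresses(parent_rows):
    """Normalise semicolon-concatenated ccaddresses into child rows.
    parent_rows: list of (pkid, ccaddresses) tuples; returns
    (ccaddress, pkidt001) tuples, mirroring what the Access function
    writes into t002newchildren."""
    children = []
    for pkid, ccaddresses in parent_rows:
        for addr in ccaddresses.split(";"):
            addr = addr.strip()           # remove spaces, as Trim() does
            if addr:                      # skip empty trailing entries
                children.append((addr, pkid))
    return children

split_ccaddresses([(1, "a@x.com; b@y.com")])
# -> [("a@x.com", 1), ("b@y.com", 1)]
```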

MS SQL Azure – Conditional Computed Column Definition Using CASE

Setting aside for the moment the rights and wrongs of using computed columns, sometimes they are just a great way to add some automation to your database and make things clearer for the user. (Note that a computed column based on GETDATE() is non-deterministic and so cannot be PERSISTED.)

But what if you want a conditional computed column whose value depends on another value within the same row? Here's something I worked out.

ALTER TABLE ProjectManagement
ADD FutureorPast AS CAST
(
CASE
WHEN TargetDate > GETDATE() OR TargetDate IS NULL THEN 'FUTURE'
WHEN TargetDate <= GETDATE() THEN 'PAST'
ELSE ''
END AS nvarchar(6)
)
GO

Nubuilder Forte – How to? Install on Windows – VIDEO INSTRUCTION

In a previous post I mentioned that if you wanted a platform focused on database driven (MySQL only) web application development, Nubuilder Forte was an excellent choice.

On visiting their site the other day I noticed that they had published a new video describing how to install it on Windows. Configuration of Nubuilder is the hardest part, and although I haven't followed it myself, the video looks like an excellent instruction. So get yourself across there and have a go.

That link is here

  • How to Install Nubuilder on Windows

Other links

  • Nubuilder main website
  • Nubuilder FORUM

MS SQL Azure – CONCAT_WS Working with addresses and nicely formatting separated fields

    I recently came across a very useful function, introduced in SQL Server 2017, called CONCAT_WS.
    I have only used it in SQL Server, but it also exists in MySQL and PostgreSQL.

    This joins together a series of fields with a chosen separator and, by combining it with NULLIF, can be used to skip blank fields and format addresses nicely.
    For applications that will at some point need addresses, whether for post or for information, this function allows addresses to be displayed in a format that most clearly reflects the requirements of most postal systems. Needless to say, it is likely that most of my systems will at some point transition to using this function.

    Here is a link to Microsoft's documentation on the function: CONCAT_WS

    Firstly, let me review what I am starting to standardise on with regard to address fields: a balance between enough detail to store any address on the planet and not so much that it becomes overly complicated. I've expanded the field descriptions somewhat. In my experience someone will have created an Excel spreadsheet to start the process of recording information, often standardised on column names such as address01/02/03 etc. Where that breakdown exists, I have indicated the fields I would normally expect to map those columns to.

    flatno-buildingdivision
    houseno-buildno
    housename-buildname
    streetfirst-address01
    streetsecond
    locality-address02
    towncity-address03
    stateoradminlocality
    postcode-address04
    country-address05

    And here is an example of the code implemented.

    CREATE VIEW v001formattedaddress AS SELECT pkid,
    email,
    CONCAT_WS(' ',NULLIF(dbo.t001addresstable.firstname,' '), NULLIF(dbo.t001addresstable.surname,' ')) AS fullname,
    CONCAT_WS(CHAR(13) + CHAR(10),
    NULLIF(dbo.t001addresstable.flatno, ' '), 
    NULLIF(dbo.t001addresstable.houseno,' '), 
    NULLIF(dbo.t001addresstable.housename,' '),
    NULLIF(dbo.t001addresstable.streetfirst,' '),
    NULLIF(dbo.t001addresstable.streetsecond,' '),
    NULLIF(dbo.t001addresstable.locality,' '),
    NULLIF(dbo.t001addresstable.towncity,' '),
    NULLIF(dbo.t001addresstable.stateoradminlocality,' '),
    NULLIF(dbo.t001addresstable.postcode,' '),
    NULLIF(dbo.t001addresstable.country,' '),
    REPLICATE('-', 30) ) AS addressconcat FROM dbo.t001addresstable;
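As an aside, the NULLIF-plus-CONCAT_WS pattern in the view above is essentially "skip blank fields, then join"; a small Python sketch of the same behaviour (hypothetical helper name):

```python
def format_address(*fields, sep="\r\n"):
    """Join address fields with sep, skipping blank or whitespace-only
    values, mirroring CONCAT_WS(CHAR(13)+CHAR(10), NULLIF(field,' '), ...)."""
    return sep.join(f for f in fields if f and f.strip())

# Blank flatno and locality fields drop out rather than leaving empty lines:
format_address("12", " ", "High Street", "", "Edinburgh", "EH1 1AA")
```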
    

    Additionally, for a recent project I made up a list of countries that covers most of the world. I put an include field next to each to allow system administrators to choose whether that value is visible in the drop down. While over time more countries may be added, I would expect it to be years or possibly decades before the values of some of the smaller nations are needed (for my particular application anyway).

    I standardised on the following; the 2 digit codes are the ISO country code standard.

    AD Andorra
    AE United Arab Emirates
    AF Afghanistan
    AG Antigua and Barbuda
    AI Anguilla
    AL Albania
    AM Armenia
    AO Angola
    AQ Antarctica
    AR Argentina
    AT Austria
    AU Australia
    AW Aruba
    AX Aland Islands
    AZ Azerbaijan
    BA Bosnia and Herzegovina
    BB Barbados
    BD Bangladesh
    BE Belgium
    BF Burkina Faso
    BG Bulgaria
    BH Bahrain
    BI Burundi
    BJ Benin
    BL Saint Barthélemy
    BM Bermuda
    BN Brunei Darussalam
    BO Bolivia, Plurinational State of
    BQ Bonaire, Sint Eustatius and Saba
    BR Brazil
    BS Bahamas
    BT Bhutan
    BV Bouvet Island
    BW Botswana
    BY Belarus
    BZ Belize
    CA Canada
    CC Cocos (Keeling) Islands
    CD Congo, the Democratic Republic of the
    CF Central African Republic
    CG Congo
    CH Switzerland
    CI Cote d’Ivoire
    CK Cook Islands
    CL Chile
    CM Cameroon
    CN China*
    CO Colombia
    CR Costa Rica
    CU Cuba
    CV Cape Verde
    CW Curaçao
    CX Christmas Island
    CY Cyprus
    CZ Czech Republic
    DE Germany
    DJ Djibouti
    DK Denmark
    DM Dominica
    DO Dominican Republic
    DZ Algeria
    EC Ecuador
    EE Estonia
    EG Egypt
    EH Western Sahara
    ER Eritrea
    ES Spain
    ET Ethiopia
    FI Finland
    FJ Fiji
    FK Falkland Islands (Malvinas)
    FO Faroe Islands
    FR France
    GA Gabon
    GB United Kingdom
    GB United Kingdom Northern Ireland
    GD Grenada
    GE Georgia
    GF French Guiana
    GG Guernsey
    GH Ghana
    GI Gibraltar
    GL Greenland
    GM Gambia
    GN Guinea
    GP Guadeloupe
    GQ Equatorial Guinea
    GR Greece
    GS South Georgia and the South Sandwich Islands
    GT Guatemala
    GW Guinea-Bissau
    GY Guyana
    HK Hong Kong SAR China
    HM Heard Island and McDonald Islands
    HN Honduras
    HR Croatia
    HT Haiti
    HU Hungary
    ID Indonesia
    IC Spain Canary Islands
    IE Ireland Republic
    IL Israel
    IM Isle of Man
    IN India
    IO British Indian Ocean Territory
    IQ Iraq
    IR Iran, Islamic Republic of
    IS Iceland
    IT Italy
    JE Jersey
    JM Jamaica
    JO Jordan
    JP Japan
    KE Kenya
    KG Kyrgyzstan
    KH Cambodia
    KI Kiribati
    KM Comoros
    KN Saint Kitts and Nevis
    KP Korea, Democratic People’s Republic of
    KR Korea, Republic of
    KW Kuwait
    KY Cayman Islands
    KZ Kazakhstan
    LA Lao People’s Democratic Republic
    LB Lebanon
    LC Saint Lucia
    LI Liechtenstein
    LK Sri Lanka
    LR Liberia
    LS Lesotho
    LT Lithuania
    LU Luxembourg
    LV Latvia
    LY Libyan Arab Jamahiriya
    MA Morocco
    MC Monaco
    MD Moldova, Republic of
    ME Montenegro
    MF Saint Martin (French part)
    MG Madagascar
    MK Macedonia, the former Yugoslav Republic of
    ML Mali
    MM Myanmar
    MN Mongolia
    MO Macau SAR China
    MQ Martinique
    MR Mauritania
    MS Montserrat
    MT Malta
    MU Mauritius
    MV Maldives
    MW Malawi
    MX Mexico
    MY Malaysia
    MZ Mozambique
    NA Namibia
    NC New Caledonia
    NE Niger
    NF Norfolk Island
    NG Nigeria
    NI Nicaragua
    NL Netherlands
    NO Norway
    NP Nepal
    NR Nauru
    NU Niue
    NZ New Zealand
    OM Oman
    PA Panama
    PE Peru
    PF French Polynesia
    PG Papua New Guinea
    PH Philippines
    PK Pakistan
    PL Poland
    PM Saint Pierre and Miquelon
    PN Pitcairn
    PS Palestine
    PT Portugal
    PY Paraguay
    QA Qatar
    RE Reunion
    RO Romania
    RS Serbia
    RU Russian Federation
    RW Rwanda
    SA Saudi Arabia
    SB Solomon Islands
    SC Seychelles
    SD Sudan
    SE Sweden
    SG Singapore
    SH Saint Helena, Ascension and Tristan da Cunha
    SI Slovenia
    SJ Svalbard and Jan Mayen
    SK Slovakia
    SL Sierra Leone
    SM San Marino
    SN Senegal
    SO Somalia
    SR Suriname
    SS South Sudan
    ST Sao Tome and Principe
    SV El Salvador
    SX Sint Maarten (Dutch part)
    SY Syrian Arab Republic
    SZ Swaziland
    TC Turks and Caicos Islands
    TD Chad
    TF French Southern Territories
    TG Togo
    TH Thailand
    TJ Tajikistan
    TK Tokelau
    TL Timor-Leste
    TM Turkmenistan
    TN Tunisia
    TO Tonga
    TR Turkey
    TT Trinidad and Tobago
    TV Tuvalu
    TW Taiwan
    TZ Tanzania United Republic of
    UA Ukraine
    UG Uganda
    US United States
    UY Uruguay
    UZ Uzbekistan
    VA Holy See (Vatican City State)
    VC Saint Vincent and the Grenadines
    VE Venezuela Bolivarian Republic of
    VG Virgin Islands, British
    VN Vietnam
    VU Vanuatu
    WF Wallis and Futuna
    WS Samoa
    YE Yemen
    YT Mayotte
    ZA South Africa
    ZM Zambia
    ZW Zimbabwe
    

    023 Postgres – Ranking and the Timestamp variable

    An investigation of ranking, the timestamp variable, the time variable and the interval variable.

    Hours minutes and seconds
    Hours minutes and tenths of seconds
    Hours minutes and hundredths of seconds
    Hours minutes and thousandths of seconds

    So to highlight the examples I will first create a database called timeexampledb.

    CREATE database timeexampledb;

    Now let's connect to that database:

    \c timeexampledb

    Now I create a table called timebucket that will hold examples of the different time formats.

    create table timebucket 
    (pkid serial primary key, 
    time1secondonly timestamp(0), 
    time2tenthsecond timestamp(1), 
    time3hundredthsecond timestamp(2), 
    time4timethousandthsecond timestamp(3));
    

    Next input some examples and see what we get.

    insert into timebucket values (1, now(),now(),now(),now());
    insert into timebucket values (2, now(),now(),now(),now());
    insert into timebucket values (3, now(),now(),now(),now());
    insert into timebucket values (4, now(),now(),now(),now());
    insert into timebucket values (5, now(),now(),now(),now());
    insert into timebucket values (6, now(),now(),now(),now());
    insert into timebucket values (7, now(),now(),now(),now());
    insert into timebucket values (8, now(),now(),now(),now());
    insert into timebucket values (9, now(),now(),now(),now());
    insert into timebucket values (10, now(),now(),now(),now());
    insert into timebucket values (11, now(),now(),now(),now());
    insert into timebucket values (12, now(),now(),now(),now());
    insert into timebucket values (14, now(),now(),now(),now());
    

    and let's see what that looks like

    Here you can see that in the fractional-second columns, where a value lands exactly on a whole second, the trailing digit is not displayed.

    Now we can do ranking on these to determine position.

    Select pkid, 
    time1secondonly, 
    rank() over wn as rank from timebucket
    window wn as (order by time1secondonly)
    order by time1secondonly;

    This results in

    So let's change this to rank the next column along.

    Select pkid, 
    time2tenthsecond, 
    rank() over wn as rank from timebucket 
    window wn as (order by time2tenthsecond) 
    order by time2tenthsecond;

    Appears to be working, but let's try the other columns.

    Select pkid, 
    time3hundredthsecond, 
    rank() over wn as rank from timebucket 
    window wn as (order by time3hundredthsecond) 
    order by time3hundredthsecond;

    Appears correct, but for good measure let's try thousandths of a second.

    Select pkid, 
    time4timethousandthsecond, 
    rank() over wn as rank from timebucket 
    window wn as (order by time4timethousandthsecond) 
    order by time4timethousandthsecond;
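    A note on the ranks: SQL's rank() gives tied values the same rank and skips the following positions, which is why columns with coarser precision show duplicate ranks. A minimal Python sketch of that semantics (illustrative numbers, not real timestamps):

```python
def sql_rank(values):
    """Mimic SQL rank(): tied values share a rank; following ranks are skipped."""
    ordered = sorted(values)
    return [ordered.index(v) + 1 for v in values]

# four rows hitting the same whole second at timestamp(0) precision,
# a fifth row one second later
ranks = sql_rank([10.0, 10.0, 10.0, 10.0, 11.0])  # [1, 1, 1, 1, 5]
```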

    And now let's add an interval column

    Alter table timebucket add column timeinterval time(0);
    

    But first let's add a further time5 column and update it to the current time so we can create some intervals

    Alter table timebucket add column time5 timestamp(0);
    Update timebucket set time5 = now();
    

    Now if we want to get the time between items we can write the following SQL

    Select pkid, 
    time5, 
    time1secondonly,
    time5-time1secondonly as tinterval 
    from timebucket;
    

    And we get

    Let's try with a different time column

    Select pkid, 
    time5, 
    time4timethousandthsecond,
    time5- time4timethousandthsecond as tinterval 
    from timebucket;

    So next I set record pkid 14 back a day and re-run to see what happens.

    Update timebucket set time4timethousandthsecond='2019-12-04' where pkid=14;
    

    and run the previous select again:

    Select pkid, 
    time5, 
    time4timethousandthsecond,
    time5- time4timethousandthsecond as tinterval 
    from timebucket;

    and we see the interval is correctly recording.

    I was unable to rank on tinterval directly from a query, so I went ahead and updated the earlier timeinterval column as follows

    update timebucket set timeinterval=time5-time4timethousandthsecond;

    and now doing a select on this we get

    select pkid, timeinterval from timebucket;

    What we see is

    But we are not showing the fact that record 14 should be 1 day. This is because we defined timeinterval as a time(0) variable, which only holds a time of day, rather than as an interval variable.

    So we can do this as follows and update appropriately.

    Alter table timebucket add column timeinterval2 interval;
    update timebucket set timeinterval2=time5-time4timethousandthsecond;
    select pkid, timeinterval2 from timebucket;

    And we get the right result
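    The interval column behaves like duration arithmetic in most languages: subtracting two timestamps yields a duration, not a clock time. A quick Python analogue with illustrative values (not the actual table data):

```python
from datetime import datetime, timedelta

# illustrative values: time5 a day and a half-hour after time4timethousandthsecond
time5 = datetime(2019, 12, 5, 10, 30, 0)
time4 = datetime(2019, 12, 4, 0, 0, 0)

tinterval = time5 - time4  # a duration (timedelta), not a time of day
# tinterval.days == 1, which a clock-time type could never represent
```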

    And now let's rank these to check it is sorting them correctly.

    Select pkid, 
    time4timethousandthsecond, 
    timeinterval2, 
    rank() over wn as rank from timebucket 
    window wn as (order by timeinterval2) 
    order by rank;

    And we get the correct result

    022 Postgres – Setting up starting variables in psqlrc

    So how do we adjust the defaults for the psql command line in Postgres?

    Set up your psqlrc defaults

    Go to the command prompt and navigate to the following directory

    c:\Users\Mark\AppData\Roaming\Postgresql\

    and either find or create a file called

    psqlrc.conf

    This is a simple text file that you can edit or create in notepad.

    --------------------------
    --psqlrc set preferences--
    -- Author Mark Brooks --
    --------------------------
    
    \set QUIET 1
    
    \x auto
    \timing
    \set COMP_KEYWORD_CASE upper
    
    \pset border 2
    \pset pager off
    \pset null <NULL>
    \setenv editor 'C:\\Program Files (x86)\\Notepad++\\notepad++.exe'
    \set VERBOSITY verbose
    
    \set QUIET 0
    
    \echo 'Welcome to PostgreSQL \n'
    \echo 'Type :version to see PostgreSQL version \n'
    \echo 'Type :extensions to see the available extensions'
    
    \set version 'SELECT version();'
    \set extensions 'select * from pg_available_extensions;'

    This allows you, for instance, to set which editor will open when you use the \e command

    021 Postgres with PostGIS plugin – Create junction table sites in catchments

    This is a working example of how it would be possible to set up code that would automatically allocate, for example, housing sites to school catchment areas. It could also be used to list constraints relevant to particular housing sites. It is more sophisticated in that it indicates what percentage of a housing site is within which catchment area, and therefore allows for a single housing site being in multiple catchment areas. I am planning on coming back and expanding on this post. In this respect it represents a refinement of this Post

    So we need two tables

    t001asites which has a geometry field called geom
    and another table which will be the catchments table called
    t002bcatchments which has a geometry field called geom.

    Both tables must have a serial primary key of pkid and both tables must be polygon data and the geom field MUST be defined as polygon and NOT multipolygon.

    Air code is as follows.

      1. Create table containing digitised polygons of housing sites.
      2. Create table containing digitised polygons of catchments.
      3. Measure the area of the housing sites and place that value in an area column within the housing sites table t001asites.
      4. Split the housing sites by the catchment boundaries, ensuring that each split polygon inherits the catchment it was split by.
      5. Re-measure the areas of these split sites and add an area column to store the new calculations.
      6. Divide the figure obtained in 5 by the figure obtained in 3, which indicates what proportion of the housing site lies in each catchment.
      7. Perform a least remainder method on the individual sites grouped by their original housing sites to ensure the proportions sum to 1.
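    Step 7, the remainder correction, can be sketched in Python using the cumulative rounding trick the SQL pipeline implements (round the running total, then difference); the proportions here are illustrative only:

```python
def cumulative_round(proportions, total=100):
    """Integer percentages that sum exactly to `total`, produced by rounding
    the running total and differencing against the previous baseline."""
    result = []
    cum = 0.0
    prev = 0
    for p in proportions:
        cum += p * total
        cur = min(round(cum), total)  # clamp a rounded 101 back down to 100
        result.append(cur - prev)
        prev = cur
    return result

# three equal shares of a site cannot all round to 33 and still sum to 100
cumulative_round([1/3, 1/3, 1/3])  # [33, 34, 33]
```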

    So to the code

    BEGIN;
    SET LOCAL check_function_bodies TO FALSE;
    CREATE OR REPLACE FUNCTION part01catchjunctionmaker() returns void as $$
    Alter table t001asites add column area integer;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part02catchjunctionmaker() returns void as $$
    Update t001asites set area=ST_Area(geom);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part022catchjunctionmaker() RETURNS void AS $$
    DROP TABLE IF EXISTS t200;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part03catchjunctionmaker() RETURNS void AS $$
    CREATE TABLE t200 AS select a.pkid as t001pkid, b.pkid as t002pkid, a.area as t001area, ST_intersection(a.geom, b.geom) as geom FROM t001asites a, t002bcatchments b;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part04catchjunctionmaker() RETURNS void AS $$
    ALTER TABLE t200 add column pkid serial primary key, add column area integer,add column proportion decimal (10,9);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part06catchjunctionmaker() RETURNS void AS $$
    UPDATE t200 SET area=ST_Area(geom);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part07catchjunctionmaker() RETURNS void AS $$
    DELETE from t200 where area=0 or area is null;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part08catchjunctionmaker() RETURNS void AS $$
    UPDATE t200 SET proportion= cast(area as decimal)/cast(t001area as decimal) WHERE area > 0;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part088catchjunctionmaker() RETURNS void AS $$
    DROP table IF EXISTS t201;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part09catchjunctionmaker() RETURNS void AS $$
    Create table t201 as Select pkid,t001pkid,t002pkid, t001area, area, proportion, sum(proportion) OVER (PARTITION BY t001pkid ORDER BY t001pkid, proportion) as cum_proportion FROM t200 ORDER BY t001pkid, proportion;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part10catchjunctionmaker() RETURNS void AS $$
    Alter table t201 add column value decimal (14,9),
    Add column valuerounded integer,
    Add column cumulvaluerounded integer,
    Add column prevbaseline integer,
    Add column roundproportion integer;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part11catchjunctionmaker() RETURNS void AS $$
    UPDATE t201 set value = proportion * 100;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part12catchjunctionmaker() RETURNS void AS $$
    UPDATE t201 set valuerounded = round(value,0);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part13catchjunctionmaker() RETURNS void AS $$
    update t201 set cumulvaluerounded = round((cum_proportion*100),0);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part14catchjunctionmaker() RETURNS void AS $$
    update t201 set cumulvaluerounded=100 where cumulvaluerounded = 101;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part15catchjunctionmaker() RETURNS void AS $$
    update t201 set prevbaseline = round((cum_proportion - proportion)*100);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part16catchjunctionmaker() RETURNS void AS $$
    update t201 set roundproportion = (cumulvaluerounded-prevbaseline);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part17catchjunctionmaker() RETURNS void AS $$
    DELETE from t201 where roundproportion=0 or roundproportion is null;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part18catchjunctionmaker() RETURNS void AS $$
    alter table t201 add column proppercent decimal(3,2);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part19catchjunctionmaker() RETURNS void AS $$
    update t201 set proppercent = cast(roundproportion as decimal)/100;
    $$ LANGUAGE SQL;
    COMMIT;
    

    and now a function to pull it all together:

    CREATE OR REPLACE FUNCTION createcjt()
    RETURNS TEXT AS
    $BODY$
    BEGIN
    PERFORM part01catchjunctionmaker();
    PERFORM part02catchjunctionmaker();
    PERFORM part022catchjunctionmaker();
    PERFORM part03catchjunctionmaker();
    PERFORM part04catchjunctionmaker();
    PERFORM part06catchjunctionmaker();
    PERFORM part07catchjunctionmaker();
    PERFORM part08catchjunctionmaker();
    PERFORM part088catchjunctionmaker();
    PERFORM part09catchjunctionmaker();
    PERFORM part10catchjunctionmaker();
    PERFORM part11catchjunctionmaker();
    PERFORM part12catchjunctionmaker();
    PERFORM part13catchjunctionmaker();
    PERFORM part14catchjunctionmaker();
    PERFORM part15catchjunctionmaker();
    PERFORM part16catchjunctionmaker();
    PERFORM part17catchjunctionmaker();
    PERFORM part18catchjunctionmaker();
    PERFORM part19catchjunctionmaker();
    RETURN 'process end';
    END;
    $BODY$
    LANGUAGE plpgsql;
    

    MS SQL Azure – Creating contained users – SQL Authentication – DACPAC and BACPAC import

    In every database engine it is important to create logins that enforce security around your database and that can be maintained.
    Additionally if you are working for a client you may wish to transfer this database at some point in the future to the client.

    In SQL Azure users can be created against the master database in the instance and the role can then be transferred to individual databases.

    Fine but there may be circumstances where you want to isolate roles to individual databases so that when they are moved the roles move with them and are not left in the master database.
    The following sets out an example of how to set up a Contained database in SQL Azure along with something extra you have to think about when re-importing to an SQL Server instance.

    Using your sysadmin account connect to the database you wish to add a user to and run;

    CREATE USER rocketengineapplication WITH PASSWORD = 'Bluedanube99';
    ALTER ROLE db_owner ADD MEMBER rocketengineapplication;
    

    Note SQL Azure requires passwords to be ‘sufficiently complicated’. At the time of writing the default Azure password complexity rules were: minimum length of 8 characters, at least 1 uppercase character, at least 1 lowercase character and at least 1 number.

    And to drop the login, go in through SSMS:

    Security / Users / the users should be listed where you can select and choose DELETE

    Now developers could use this username and password to log in to the database and do most of what is required without having any privileges on the SQL Server, and if you ever transfer the database the role will move with it.

    Here is a link to built in database roles
    SQL Database Roles

    Secure a single or pooled database in SQL Azure

    and here is a useful query that can be run to identify the users and roles on a particular database. This allows you to check what users are on a database and what their roles are.

    SELECT u.name AS UserName, u.type_desc AS UserType, r.name AS RoleName
    FROM sys.database_principals AS u
    LEFT JOIN sys.database_role_members AS rm ON rm.member_principal_id = u.principal_id
    LEFT JOIN sys.database_principals AS r ON r.principal_id = rm.role_principal_id
    WHERE
        u.type NOT IN('R', 'G')
    ORDER BY
          UserName
        , RoleName;
    

    Note that when deploying or importing data-tier applications to, for instance, SQL Express, contained database authentication is deactivated by default and must be activated.

    To do this connect to the local sql express instance and highlight the Databases on the left hand side then run the following code

    sp_configure 'contained database authentication', 1;
    GO
    RECONFIGURE;
    GO
    

    DACPAC (structure only) and BACPAC (data and structure) import should now be possible locally!

    This will specifically assist with the following error message which I was getting when I tried to import the database back into a local machine.

    TITLE: Microsoft SQL Server Management Studio
    ------------------------------
    
    Could not deploy package.
    Error SQL72014: .Net SqlClient Data Provider: Msg 12824, Level 16, State 1, Line 5 The sp_configure value 'contained database authentication' must be set to 1 in order to alter a contained database. You may need to use RECONFIGURE to set the value_in_use.
    Error SQL72045: Script execution error. The executed script:
    IF EXISTS (SELECT 1
    FROM [master].[dbo].[sysdatabases]
    WHERE [name] = N'$(DatabaseName)')
    BEGIN
    ALTER DATABASE [$(DatabaseName)]
    SET FILESTREAM(NON_TRANSACTED_ACCESS = OFF),
    CONTAINMENT = PARTIAL
    WITH ROLLBACK IMMEDIATE;
    END
    
    Error SQL72014: .Net SqlClient Data Provider: Msg 5069, Level 16, State 1, Line 5 ALTER DATABASE statement failed.
    Error SQL72045: Script execution error. The executed script:
    IF EXISTS (SELECT 1
    FROM [master].[dbo].[sysdatabases]
    WHERE [name] = N'$(DatabaseName)')
    BEGIN
    ALTER DATABASE [$(DatabaseName)]
    SET FILESTREAM(NON_TRANSACTED_ACCESS = OFF),
    CONTAINMENT = PARTIAL
    WITH ROLLBACK IMMEDIATE;
    END
    
    (Microsoft.SqlServer.Dac)
    

    I note that for local SQL Express, and I believe enterprise SQL Server, there is the additional option of creating a BAK backup. I also note that from my reading BAK files are considered preferable to BACPAC files because they have enforced ACID compliance, and for large databases that are constantly in use they are recommended over BACPAC files. SQL Azure doesn’t allow BAK file backups through SSMS from what I can see, so if this is an issue for you consider temporarily disconnecting front ends from the database while the BACPAC export runs. If you need a BAK file for some reason you can attach the database locally to a SQL Server instance and take a BAK file from there.

    Something to be aware of..

    MS SQL Azure – Computed Columns

    It can be really nice to add a computed column to a table rather than storing the value in an ordinary field

    This would work well using the function listed in the previous post where I automatically calculate the age of trees.

    Add Computed Column to SQL Azure Table

    ALTER TABLE dbo.t001trees ADD treeage AS (dbo.functionyearmonthday(dbo.t001trees.plantdate, GETDATE()));
    

    This will appear in the table and look like an actual field, but it is calculated on the fly and the figures are not stored in the table unless you specify persistence (the PERSISTED keyword)

    see the above link for further reading on this topic

    MS SQL Azure – TSQL to name the age between dates in text

    It is relatively easy to calculate the number of years, months, days, hours or seconds between two dates using the native DATEDIFF built-in function which comes with SQL.

    e.g.

    SELECT dbo.t001trees.pkid, 
    dbo.t001trees.plantdate, 
    DATEDIFF(Year, dbo.t001trees.plantdate, GETDATE()) as treeage 
    from dbo.t001trees;
    

    But here is a function that will spell it out into a string that reads something like
    2 days
    1 month 2 days
    2 years 1 month 2 days

    CREATE OR ALTER FUNCTION dbo.functionyearmonthday
    (
    @datefrom Date,
    @dateto Date
    )
    RETURNS varchar(100)
    as
    BEGIN
    DECLARE @date1 DATETIME, @date2 DATETIME, @result VARCHAR(100);
    DECLARE @years INT, @months INT, @days INT;
    
    SET @date1 = @datefrom
    SET @date2 = @dateto
    
    SELECT @years = DATEDIFF(yy, @date1, @date2)
    IF DATEADD(yy, -@years, @date2) < @date1
    SELECT @years = @years-1
    SET @date2 = DATEADD(yy, -@years, @date2)
    
    SELECT @months = DATEDIFF(mm, @date1, @date2)
    IF DATEADD(mm, -@months, @date2) < @date1
    SELECT @months=@months-1
    SET @date2= DATEADD(mm, -@months, @date2)
    
    SELECT @days=DATEDIFF(dd, @date1, @date2)
    IF DATEADD(dd, -@days, @date2) < @date1
    SELECT @days=@days-1
    SET @date2= DATEADD(dd, -@days, @date2)
    
    SELECT @result= ISNULL(CAST(NULLIF(@years,0) AS VARCHAR(10)) + ' years ','')
    + ISNULL(' ' + CAST(NULLIF(@months,0) AS VARCHAR(10)) + ' months ','')
    + ISNULL(' ' + CAST(NULLIF(@days,0) AS VARCHAR(10)) + ' days','')
    
    RETURN @result;
    END
    

    And if you would like to call the function from another query here is an example

    SELECT dbo.functionyearmonthday(dbo.t001trees.plantdate, GETDATE()) as treeage FROM dbo.t001trees
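    For comparison, the successive subtraction the function performs (years first, then months, then days) can be sketched in Python; this is a simplified analogue, and month-end edge cases may differ from the T-SQL:

```python
from datetime import date, timedelta

def year_month_day(datefrom, dateto):
    """Decompose the gap between two dates into whole years, months and days,
    a simplified analogue of the successive DATEDIFF/DATEADD subtraction."""
    years = dateto.year - datefrom.year
    months = dateto.month - datefrom.month
    days = dateto.day - datefrom.day
    if days < 0:
        # borrow the length of the month preceding dateto
        months -= 1
        first_of_month = date(dateto.year, dateto.month, 1)
        days += (first_of_month - timedelta(days=1)).day
    if months < 0:
        years -= 1
        months += 12
    return years, months, days

year_month_day(date(2017, 1, 3), date(2019, 2, 5))  # (2, 1, 2)
```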

    That is enough for most people, but the function can be expanded to include hours, minutes, seconds and milliseconds, which could be useful if you need more precision. It can be seen that the native DATEDIFF function is used extensively within this function.

    CREATE OR ALTER FUNCTION dbo.functiontimeperiodmoreprecision
    (
    @datefrom Date,
    @dateto Date
    )
    RETURNS varchar(100)
    as
    BEGIN
    DECLARE @date1 DATETIME, @date2 DATETIME, @result VARCHAR(100);
    DECLARE @years INT, @months INT, @days INT,
    @hours INT, @minutes INT, @seconds INT, @milliseconds INT;
    
    SET @date1 = @datefrom
    SET @date2 = @dateto
    
    SELECT @years = DATEDIFF(yy, @date1, @date2)
    IF DATEADD(yy, -@years, @date2) < @date1
    SELECT @years = @years-1
    SET @date2 = DATEADD(yy, -@years, @date2)
    
    SELECT @months = DATEDIFF(mm, @date1, @date2)
    IF DATEADD(mm, -@months, @date2) < @date1
    SELECT @months=@months-1
    SET @date2= DATEADD(mm, -@months, @date2)
    
    SELECT @days=DATEDIFF(dd, @date1, @date2)
    IF DATEADD(dd, -@days, @date2) < @date1
    SELECT @days=@days-1
    SET @date2= DATEADD(dd, -@days, @date2)
    
    SELECT @hours=DATEDIFF(hh, @date1, @date2)
    IF DATEADD(hh, -@hours, @date2) < @date1
    SELECT @hours=@hours-1
    SET @date2= DATEADD(hh, -@hours, @date2)
    
    SELECT @minutes=DATEDIFF(mi, @date1, @date2)
    IF DATEADD(mi, -@minutes, @date2) < @date1
    SELECT @minutes=@minutes-1
    SET @date2= DATEADD(mi, -@minutes, @date2)
    
    SELECT @seconds=DATEDIFF(s, @date1, @date2)
    IF DATEADD(s, -@seconds, @date2) < @date1
    SELECT @seconds=@seconds-1
    SET @date2= DATEADD(s, -@seconds, @date2)
    
    SELECT @milliseconds=DATEDIFF(ms, @date1, @date2)
    
    SELECT @result= ISNULL(CAST(NULLIF(@years,0) AS VARCHAR(10)) + ' years,','')
    + ISNULL(' ' + CAST(NULLIF(@months,0) AS VARCHAR(10)) + ' months,','')
    + ISNULL(' ' + CAST(NULLIF(@days,0) AS VARCHAR(10)) + ' days,','')
    + ISNULL(' ' + CAST(NULLIF(@hours,0) AS VARCHAR(10)) + ' hours,','')
    + ISNULL(' ' + CAST(@minutes AS VARCHAR(10)) + ' minutes and','')
    + ISNULL(' ' + CAST(@seconds AS VARCHAR(10))
    + CASE
    WHEN @milliseconds > 0
    THEN '.' + CAST(@milliseconds AS VARCHAR(10))
    ELSE ''
    END
    + ' seconds','')
    
    RETURN @result
    END
    

    MS Access Function : Function to create SQL union queries

    Another small function that can cut down the amount of text you need to write for large union queries.

    Typically this can be used with
    MS Access Function : Scan through a directory and write list of files to a table.

    There are a number of data providers that supply data files broken down into different geographical areas. In previous posts I have outlined how we can get these all into PostGIS. But once they are in PostGIS (or any other database) you may wish to combine these separate tables into one single global table. Clearly a union query will do this; however, it can be time consuming to write the union query out, as it simply has so many tables in it.

    I used the code in the link to scan a directory and get all the filenames (in this case shape files of the UK road network) into a table that I called UKRoadLinks, which had two fields: PKID (long integer autonumber primary key) and Filen, a text field holding the filenames.

    I then wrote the following function to write a text file that on completion contains an SQL union of all the tables listed in your recordset. I then copied and pasted this into the PostGIS database into which I had already imported all the sub tables, to union them into a single copy. Alter the recordset source if, for instance, you wish to use only a subset. The nice thing about this is that if you have hundreds of tables to amalgamate there is less likelihood of accidentally missing or misspelling table names.
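    The statement the generated text file ends up holding can be sketched as follows; a Python analogue of the same string-building, with hypothetical table names:

```python
def union_sql(tables, target="sqltomergetables"):
    """Build a CREATE TABLE ... AS union over many identically-structured tables."""
    selects = " UNION ".join("(select * from %s)" % t for t in tables)
    return "CREATE TABLE %s AS %s;" % (target, selects)

union_sql(["roads_a", "roads_b"])
# 'CREATE TABLE sqltomergetables AS (select * from roads_a) UNION (select * from roads_b);'
```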

    Public Function createunionsqllinks()
    
    Dim rst As DAO.Recordset
    Set rst = CurrentDb.OpenRecordset("UKRoadLinks")
    
    Dim fs, TextFile
    Set fs = CreateObject("Scripting.FileSystemObject")
    Set TextFile = fs.CreateTextFile("c:\data\sqlmerge.txt", True)
    TextFile.WriteLine ("CREATE TABLE sqltomergetables AS ")
    Do Until rst.EOF = True
    TextFile.WriteLine (Chr$(40) & "select * from " & rst!Filen & Chr$(41))
    rst.MoveNext
    ' write UNION between tables but not after the last one
    If Not rst.EOF Then TextFile.WriteLine (" UNION ")
    Loop
    rst.Close
    TextFile.WriteLine (";")
    TextFile.Close
    
    MsgBox "Created"
    
    End Function

    019 Postgres and Postgis: Load multiple shape files into a Postgis database

    Environment
    Windows 10
    Postgresql 11

    To start you need a set of shape files that you know the location of.

    For demonstration purposes I use the OS Open Road open data shape files.

    Link to OS Open Data

    Next within explorer I create a directory in the c:\ root called data.
    And then I create two subdirectories.
    1 – shp
    2 – sql

    Place all the shape files you wish to be loaded into a postgis database into the newly created c:\data\shp directory.

    Next we will make sql text files that can be used to transfer the information from shape format into Postgis. At this point it is not necessary to even know what database you are going to place the shape files into.

    Next open up the command prompt and navigate to the bin directory of your particular Postgres installation. The standard location is as follows:

    Next I copy in the below text.

    For %f in (C:\data\shp\*.shp) do Shp2pgsql -s 27700 %f public.%~nf > C:\data\sql\%~nf.sql

    This will take all the shape files in the C:\data\shp\ directory and create sql text files placing them in the C:\data\sql\. After completion you may wish to go to the directory and check that creation has occurred. You can even look at the text within a single file preferably with some kind of code editor to check that it looks approximately correct.
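    If you prefer scripting it, the same filename-to-command expansion can be sketched in Python (directories and SRID as above; this only builds the command strings, it does not run them):

```python
from pathlib import Path

def shp2pgsql_commands(shp_dir, sql_dir, srid=27700):
    """One shp2pgsql command per shapefile, mirroring the cmd for-loop."""
    cmds = []
    for shp in sorted(Path(shp_dir).glob("*.shp")):
        out = Path(sql_dir) / (shp.stem + ".sql")
        cmds.append("shp2pgsql -s %d %s public.%s > %s" % (srid, shp, shp.stem, out))
    return cmds
```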

    Next within psql I create the target database and apply the postgis extension.
    create database osopenroaddb;
    create extension postgis;

    Next, unless you have a lot of time on your hands you will want to go to the
    C:\Users\Mark\AppData\Roaming\postgresql
    and open up the pgpass.conf file and place the following line in it.

    localhost:5432:*:postgres:yoursecretpassword

    Without that line you will be asked for a password every time a new sql file is run.

    Next, back in the command prompt bin directory, do the following. Note there is no password; it is taken from the pgpass.conf file.

    For %f in (c:\data\sql\*.sql) do psql -h localhost -d osopenroaddb -U postgres -f %f > nul

    If you have a lot of big files this could take a while to run. But once finished all your shape files should now be in your postgis instance in the specified database.

    018 Postgres : Export Data and Structure of a Single database from a Postgres instance / Import Data and Structure of a Single database into a Postgres Instance

    Demonstration environment and programs
    Windows 10
    Postgres Version : 11.2
    QGIS desktop version : 3.4.4

    My working through of a process to export a single database (structure and data, with the PostGIS and pgRouting extensions enabled) from a Postgres instance, followed by importing it back, in this example into the same instance, though in principle it could be a different one.
    Access the command prompt (RUN AS ADMINISTRATOR)

    PLEASE NOTE run the command prompt as administrator or you will frequently get an ACCESS DENIED message when using the pg_dump command.

    Navigate to the bin subdirectory of the Postgres version from which you wish to export the database (here 11). You can confirm that pg_dump.exe is there by doing a dir on the directory; alternatively you could reference the full path to pg_dump and then pass the parameters to it.

    eg

    Next pass as parameters the database you wish to export along with the name you want to give the exported file, then hit return.

    pg_dump -U postgres -p 5432 edinburghrouting > c:\dbexport.pgsql

    Depending on the security of your instance, on hitting return you will be prompted for a password.

    Enter the password hit return

    When I do this on my home computer there is no return message but going into the C drive I can see that dbexport.pgsql now exists.

    Next we want to create a blank database into which to import the data and structure.
    This we do in psql signed in as a user with sufficient privilege.

    Now back in the command line running as administrator we can run the following.

    psql -U postgres importededinburghrouting < c:\dbexport.pgsql

    Pressing return, depending on your security, you should be asked for your password.

    Once this is done it goes through a process of recreating the structure of the database and then importing all the data.

    For me the first lines look like this

    and the last look like this

    Now looking at the instance as a whole we can see the imported database

    and here I am displaying geographical information through QGIS to get an idea of the data and ensure that it appears to be all correct.

      SUMMARY


    There are quite a lot of tutorials online on how to do this but most seem to skip over some details. I've tried to be as accurate as possible, but depending on your setup there may be differences. Nonetheless this is an extremely important task to perform, so it is worth practicing to get right.