GitHub Repository : getting started with online code storage

One of the principles of the 3-2-1 rule of backup storage is that you should have one backup copy off site. Great, but for code this can be tricky, as the files are constantly changing – so how do you set up a continuous pipeline that allows for changes in code? Now you could use something like Google Cloud, which is excellent and completely acceptable, however I wanted to use Git, mainly because it is better adapted to actually editing code if need be (important for removing security parameters in things like web.config files), has a finer-grained level of control and is considered a professional format for storing and versioning code. Of course with greater flexibility comes greater complexity. So here are my beginnings on using GitHub.

First things first: register with GitHub.

GitHub Website

I would then recommend that you download Git Bash – this is a local program that will allow you to better sync your files between your development machine and your online GitHub repository. I found the following video tutorial excellent for getting up and running, in particular for downloading and installing Git Bash and linking it to your GitHub repository.

GitHub (Good tutorial on starting out)

Now here I am going to set up my local repositories in the root directory and simply call the directory git

c:/git

For my purposes I have started by creating a repository on the GitHub website.

Then open Git Bash.

You will be presented with the following window

The first tricky thing is to navigate the bash prompt.
Execute the following commands:

cd /c
cd git

Next we are going to clone a repository from our GitHub web account to our local machine.
Cloning does two main things: it creates a sub-directory, in this case within the c:\git directory, and then copies all the files down from the web repository.

The command to do this is
git clone (followed by the url of your repository obtained here)
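For example, with a hypothetical account and repository name (substitute your own), the full command would look like this:

git clone https://github.com/yourusername/yourrepo.git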

Execute it, and something similar to the below should now appear – a new sub-directory is created on your local machine with all the files within it. Congratulations, you have just downloaded all these files – you could do this with any open source project on GitHub.

Next let's see how we can download changes from the web master copy, or alternatively upload changes from local.

You might be thinking: why would I want to upload from local when I can cut and paste directly into the web browser and commit from there? Fair enough, but the browser user interface won't allow you to transfer more than 100 files, which for web applications is a killer.


OK, uploading from LOCAL
As an example let's make a change to the readme file and try to get it accepted onto our online account.
Go to your new directory and edit the README.md file in your editor of choice, making some kind of change.


Now back in bash perform the following:

Navigate to the directory; since README.md is already tracked by git, you can commit the change directly:

git commit -m "Fourth commit" README.md

This has committed the change in that particular file to the local git repository – this is important: without a commit git doesn't know anything has changed. Next you want to push the local commit to the master web version.

Execute the following command
git push -u origin master

Now you can go to your web repository and you should see that README.md has changed

Now the same thing can be done with whole directories, so simply use Explorer to copy the item into the local directory and then push from there.

DOWNLOADING changes from GitHub repository to local.
This time let's make an edit to README.md through the web repository and commit it there. Then, back in Git Bash, pull the change down:

git pull origin master

And we see that local has now been updated with the web repository

But what if we need to add a complete directory or a new file to local and then upload it?

Here we first need to add the directory or file to local

For example create a new file in the local directory called MyNewFile.txt

git add MyNewFile.txt

Before transferring this file to your GitHub account you need to commit it first.

git commit -m "comment message" MyNewFile.txt

You then need to push this to your GitHub account:

git push -u origin master

Going back to your GitHub repository on your account and refreshing the screen, you should see the item.

The above processes should get you started with GitHub. If things start to go wrong with errors, try the below.

Note: it would appear that if you wish to overwrite a local file or directory and then add it to your remote repository, you still have to go through the add procedure. (Initially I had assumed that because a directory might already be there you might not need to add it.)

So if you were going to overwrite the output folder it would be a case of
git add -A
git commit -m "Output file overwritten"
git push origin master

REMOVING a repository and starting again..
Removing a repository from local can be useful if, like me, you make changes to the local and web versions independently, git then has issues merging the two and prevents the merge, and starting out you are not sure how to fix everything manually. I found the easiest way was to make sure you were happy with the online version, delete the local copy and then clone to local again.

Firstly navigate to the repository directory you wish to delete and then

$ rm -rf .git

Now you can delete the directory and go forward, but remember you will need to clone the repository again and start from there.
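As a sketch, assuming the repository is called myrepo and lives under c:\git (both names hypothetical – substitute your own), the whole reset looks like this:

cd /c/git/myrepo
rm -rf .git
cd ..
rm -rf myrepo
git clone https://github.com/yourusername/myrepo.git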

MS Access Function : Automate Normalisation or De-concatenate a field (MS Outlook export example)

I was contemplating a better way of storing our old emails and, unhappy with some of the systems in place, I started considering whether I could dump them into a database.

Thing is, when you export from Outlook, some of the standard fields, in particular To and From, are concatenated. Wouldn't it be nice to separate those fields into their own table of names and addresses and reference them back to a main table of messages?

This is what I came up with.

For demonstration purposes I will use two tables

t001parent
pkid - autonumber primary key
ccaddresses - memo or long text

and the child table

t002newchildren
pkid - autonumber primary key
ccaddress - string(150 should do it)
pkidt001 - number

and here is the blank second table

Next we create a user defined function

Public Function CreateChildTable()

    Dim db          As DAO.Database
    Dim rs          As DAO.Recordset
    Dim rsTarget    As DAO.Recordset

    Dim strSQL      As String
    Dim strField1   As String
    Dim strField2   As String
    Dim varData     As Variant
    Dim i           As Integer
    Dim intPKID     As Long ' pkid is an autonumber (Long Integer), so avoid Integer overflow

    Set db = CurrentDb

    'Select all the ccaddresses from the parent table
    strSQL = "SELECT pkid,ccaddresses FROM t001parent"

    Set rsTarget = db.OpenRecordset("t002newchildren", dbOpenDynaset, dbAppendOnly)

    Set rs = db.OpenRecordset(strSQL, dbOpenDynaset)
    With rs
        While Not .EOF
            intPKID = !pkid
            varData = Split(!ccaddresses, ";") ' Get all semi colon delimited fields
               
            'add the first email address (the Split array always has at least one element)
            With rsTarget
                .AddNew
                !ccaddress = Trim(varData(0)) ' remove spaces before writing new fields
                !pkidt001 = intPKID
                .Update
            End With
            

            'loop through additional email addresses and add them as children to the table
            For i = 1 To UBound(varData)
                With rsTarget
                    .AddNew
                    !ccaddress = Trim(varData(i)) ' remove spaces before writing new fields
                    !pkidt001 = intPKID
                    .Update
                End With
            Next
            .MoveNext
        Wend

        .Close
        rsTarget.Close

    End With

    Set rsTarget = Nothing
    Set rs = Nothing
    db.Close
    Set db = Nothing

End Function
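A quick way to run the function is from the VBA Immediate window (Ctrl+G):

?CreateChildTable()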

After running this we should see the child table populated as below.

We’ve just nicely split the parent table ccaddresses field into constituent emails and normalised it as a child of the parent table.

MS SQL Azure – Conditional Computed Column Definition Using CASE

Setting aside for the moment the rights and wrongs of using persisted columns, sometimes it is just a great way to add some automation to your database and make it clearer for the user.

But what if you want to add a conditional computed column whose value depends on another value within the same row? Here's something I worked out.

ALTER TABLE ProjectManagement
ADD FutureorPast AS CAST
(
CASE
    WHEN TargetDate > GETDATE() OR TargetDate IS NULL THEN 'FUTURE'
    WHEN TargetDate <= GETDATE() THEN 'PAST'
    ELSE ''
END AS nvarchar(6)
)
GO
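A quick select against the table then shows the computed value alongside the date it derives from (TargetDate and FutureorPast are from the example above; any other columns will depend on your table):

SELECT TargetDate, FutureorPast
FROM ProjectManagement;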

Nubuilder Forte – How to install on Windows – VIDEO INSTRUCTION

In a previous post I mentioned that if you wanted a platform focused on database-driven (MySQL only) web application development, Nubuilder Forte was an excellent choice.

On visiting their site the other day I noticed that they had published a new video which describes how to install it on Windows. Configuration of Nubuilder is the hardest part and, although I haven't worked through it myself, this video looks like an excellent instruction. So get yourself across there and have a go.

That link is here

  • How to Install Nubuilder on Windows

    Other links

  • Nubuilder main website
  • Nubuilder FORUM

MS SQL Azure – CONCAT_WS : Working with addresses and nicely formatting separated fields

    I recently came across a very useful function, which I am told was introduced in SQL Server 2017, called CONCAT_WS.
    I have only used it in SQL Server, but it also exists in MySQL and PostgreSQL.

    This will join together a series of fields with a chosen separator, and by combining it with NULLIF it can be used to skip blank fields and format addresses nicely.
    For applications that will at some point need addresses, either for post or for information, this function allows the display of addresses in a format that most clearly reflects the requirements of most postal systems. Needless to say, it is likely that most of my systems will at some point transition to the use of this function.

    Here is a link to Microsoft's documentation on the function: CONCAT_WS
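    The key behaviour is that CONCAT_WS simply skips NULL arguments rather than emitting an empty separator, which is what makes the NULLIF trick below work. A minimal illustration:

    SELECT CONCAT_WS(', ', 'first', NULL, 'third');
    -- returns: first, third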

    Firstly let me review what I am starting to standardise on with regard to address fields, which should be a balance between enough detail to store any address on the planet but not so much that it is overly complicated. I've expanded their descriptions somewhat. In my experience someone will have created an Excel spreadsheet to start the process of recording information, often standardising on column names such as address01/02/03 etc. Where that breakdown exists I have indicated the fields that I would normally expect to map them to.

    flatno-buildingdivision
    houseno-buildno
    housename-buildname
    streetfirst-address01
    streetsecond
    locality-address02
    towncity-address03
    stateoradminlocality
    postcode-address04
    country-address05

    And here is an example of the code implemented.

    CREATE VIEW v001formattedaddress AS SELECT pkid,
    email,
    CONCAT_WS(' ',NULLIF(dbo.t001addresstable.firstname,' '), NULLIF(dbo.t001addresstable.surname,' ')) AS fullname,
    CONCAT_WS(CHAR(13) + CHAR(10),
    NULLIF(dbo.t001addresstable.flatno, ' '), 
    NULLIF(dbo.t001addresstable.houseno,' '), 
    NULLIF(dbo.t001addresstable.housename,' '),
    NULLIF(dbo.t001addresstable.streetfirst,' '),
    NULLIF(dbo.t001addresstable.streetsecond,' '),
    NULLIF(dbo.t001addresstable.locality,' '),
    NULLIF(dbo.t001addresstable.towncity,' '),
    NULLIF(dbo.t001addresstable.stateoradminlocality,' '),
    NULLIF(dbo.t001addresstable.postcode,' '),
    NULLIF(dbo.t001addresstable.country,' '),
    REPLICATE('-', 30) ) AS addressconcat FROM dbo.t001addresstable;
    

    Additionally, on reflection, for a recent project I made up a list of countries that covers most in the world. For my project I put an include field next to them to allow system administrators to choose whether these would be visible in the drop down. Clearly, while over time more and more countries may be added, I would expect it to be years or possibly decades before some of the values of the smaller nations are needed. (For my particular application anyway.)

    I standardised on the following; the 2-digit codes are the ISO country code standard.

    AD Andorra
    AE United Arab Emirates
    AF Afghanistan
    AG Antigua and Barbuda
    AI Anguilla
    AL Albania
    AM Armenia
    AO Angola
    AQ Antarctica
    AR Argentina
    AT Austria
    AU Australia
    AW Aruba
    AX Aland Islands
    AZ Azerbaijan
    BA Bosnia and Herzegovina
    BB Barbados
    BD Bangladesh
    BE Belgium
    BF Burkina Faso
    BG Bulgaria
    BH Bahrain
    BI Burundi
    BJ Benin
    BL Saint Barthélemy
    BM Bermuda
    BN Brunei Darussalam
    BO Bolivia, Plurinational State of
    BQ Bonaire, Sint Eustatius and Saba
    BR Brazil
    BS Bahamas
    BT Bhutan
    BV Bouvet Island
    BW Botswana
    BY Belarus
    BZ Belize
    CA Canada
    CC Cocos (Keeling) Islands
    CD Congo, the Democratic Republic of the
    CF Central African Republic
    CG Congo
    CH Switzerland
    CI Cote d’Ivoire
    CK Cook Islands
    CL Chile
    CM Cameroon
    CN China
    CO Colombia
    CR Costa Rica
    CU Cuba
    CV Cape Verde
    CW Curaçao
    CX Christmas Island
    CY Cyprus
    CZ Czech Republic
    DE Germany
    DJ Djibouti
    DK Denmark
    DM Dominica
    DO Dominican Republic
    DZ Algeria
    EC Ecuador
    EE Estonia
    EG Egypt
    EH Western Sahara
    ER Eritrea
    ES Spain
    ET Ethiopia
    FI Finland
    FJ Fiji
    FK Falkland Islands (Malvinas)
    FO Faroe Islands
    FR France
    GA Gabon
    GB United Kingdom
    GB United Kingdom Northern Ireland
    GD Grenada
    GE Georgia
    GF French Guiana
    GG Guernsey
    GH Ghana
    GI Gibraltar
    GL Greenland
    GM Gambia
    GN Guinea
    GP Guadeloupe
    GQ Equatorial Guinea
    GR Greece
    GS South Georgia and the South Sandwich Islands
    GT Guatemala
    GW Guinea-Bissau
    GY Guyana
    HK Hong Kong SAR China
    HM Heard Island and McDonald Islands
    HN Honduras
    HR Croatia
    HT Haiti
    HU Hungary
    ID Indonesia
    IC Spain Canary Islands
    IE Ireland Republic
    IL Israel
    IM Isle of Man
    IN India
    IO British Indian Ocean Territory
    IQ Iraq
    IR Iran, Islamic Republic of
    IS Iceland
    IT Italy
    JE Jersey
    JM Jamaica
    JO Jordan
    JP Japan
    KE Kenya
    KG Kyrgyzstan
    KH Cambodia
    KI Kiribati
    KM Comoros
    KN Saint Kitts and Nevis
    KP Korea, Democratic People’s Republic of
    KR Korea, Republic of
    KW Kuwait
    KY Cayman Islands
    KZ Kazakhstan
    LA Lao People’s Democratic Republic
    LB Lebanon
    LC Saint Lucia
    LI Liechtenstein
    LK Sri Lanka
    LR Liberia
    LS Lesotho
    LT Lithuania
    LU Luxembourg
    LV Latvia
    LY Libyan Arab Jamahiriya
    MA Morocco
    MC Monaco
    MD Moldova, Republic of
    ME Montenegro
    MF Saint Martin (French part)
    MG Madagascar
    MK Macedonia, the former Yugoslav Republic of
    ML Mali
    MM Myanmar
    MN Mongolia
    MO Macau SAR China
    MQ Martinique
    MR Mauritania
    MS Montserrat
    MT Malta
    MU Mauritius
    MV Maldives
    MW Malawi
    MX Mexico
    MY Malaysia
    MZ Mozambique
    NA Namibia
    NC New Caledonia
    NE Niger
    NF Norfolk Island
    NG Nigeria
    NI Nicaragua
    NL Netherlands
    NO Norway
    NP Nepal
    NR Nauru
    NU Niue
    NZ New Zealand
    OM Oman
    PA Panama
    PE Peru
    PF French Polynesia
    PG Papua New Guinea
    PH Philippines
    PK Pakistan
    PL Poland
    PM Saint Pierre and Miquelon
    PN Pitcairn
    PS Palestine
    PT Portugal
    PY Paraguay
    QA Qatar
    RE Reunion
    RO Romania
    RS Serbia
    RU Russian Federation
    RW Rwanda
    SA Saudi Arabia
    SB Solomon Islands
    SC Seychelles
    SD Sudan
    SE Sweden
    SG Singapore
    SH Saint Helena, Ascension and Tristan da Cunha
    SI Slovenia
    SJ Svalbard and Jan Mayen
    SK Slovakia
    SL Sierra Leone
    SM San Marino
    SN Senegal
    SO Somalia
    SR Suriname
    SS South Sudan
    ST Sao Tome and Principe
    SV El Salvador
    SX Sint Maarten (Dutch part)
    SY Syrian Arab Republic
    SZ Swaziland
    TC Turks and Caicos Islands
    TD Chad
    TF French Southern Territories
    TG Togo
    TH Thailand
    TJ Tajikistan
    TK Tokelau
    TL Timor-Leste
    TM Turkmenistan
    TN Tunisia
    TO Tonga
    TR Turkey
    TT Trinidad and Tobago
    TV Tuvalu
    TW Taiwan
    TZ Tanzania United Republic of
    UA Ukraine
    UG Uganda
    US United States
    UY Uruguay
    UZ Uzbekistan
    VA Holy See (Vatican City State)
    VC Saint Vincent and the Grenadines
    VE Venezuela Bolivarian Republic of
    VG Virgin Islands, British
    VN Vietnam
    VU Vanuatu
    WF Wallis and Futuna
    WS Samoa
    YE Yemen
    YT Mayotte
    ZA South Africa
    ZM Zambia
    ZW Zimbabwe
    

    023 Postgres – Ranking and the Timestamp variable

    An investigation of ranking, the timestamp variable, the time variable and the interval variable.

    Hours minutes and seconds
    Hours minutes and tenths of seconds
    Hours minutes and hundredths of seconds
    Hours minutes and thousandths of seconds

    So to highlight the examples I will first create a database called timeexampledb.

    CREATE database timeexampledb;

    Now let's connect to that database

    \c timeexampledb

    Now I create a table called timebucket that will hold examples of the different time formats.

    create table timebucket 
    (pkid serial primary key, 
    time1secondonly timestamp(0), 
    time2tenthsecond timestamp(1), 
    time3hundredthsecond timestamp(2), 
    time4timethousandthsecond timestamp(3));
    

    Next input some examples and see what we get.

    insert into timebucket values (1, now(),now(),now(),now());
    insert into timebucket values (2, now(),now(),now(),now());
    insert into timebucket values (3, now(),now(),now(),now());
    insert into timebucket values (4, now(),now(),now(),now());
    insert into timebucket values (5, now(),now(),now(),now());
    insert into timebucket values (6, now(),now(),now(),now());
    insert into timebucket values (7, now(),now(),now(),now());
    insert into timebucket values (8, now(),now(),now(),now());
    insert into timebucket values (9, now(),now(),now(),now());
    insert into timebucket values (10, now(),now(),now(),now());
    insert into timebucket values (11, now(),now(),now(),now());
    insert into timebucket values (12, now(),now(),now(),now());
    insert into timebucket values (14, now(),now(),now(),now());
    

    and let's see what that looks like.
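    A straight select over the table shows all four precisions side by side:

    select * from timebucket;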

    Here you can see, from the tenth-of-a-second column onwards, that where a timestamp lands exactly on a second the trailing digit disappears from the display.

    Now we can do ranking on these to determine position.

    Select pkid, 
    time1secondonly, 
    rank() over wn as rank from timebucket
    window wn as (order by time1secondonly)
    order by time1secondonly;

    This results in

    So let's change this to rank the next column along.

    Select pkid, 
    time2tenthsecond, 
    rank() over wn as rank from timebucket 
    window wn as (order by time2tenthsecond) 
    order by time2tenthsecond;

    Appears to be working, but let's try the other columns.

    Select pkid, 
    time3hundredthsecond, 
    rank() over wn as rank from timebucket 
    window wn as (order by time3hundredthsecond) 
    order by time3hundredthsecond;

    Appears correct, but for good measure let's try thousandths of a second.

    Select pkid, 
    time4timethousandthsecond, 
    rank() over wn as rank from timebucket 
    window wn as (order by time4timethousandthsecond) 
    order by time4timethousandthsecond;

    And now let's add an interval column.

    Alter table timebucket add column timeinterval time(0);
    

    But let's add a further time5 column and update it to the time now, so we can create some intervals.

    Alter table timebucket add column time5 timestamp(0);
    Update timebucket set time5 = now();
    

    Now if we want to get the time between items we can use the following SQL:

    Select pkid, 
    time5, 
    time1secondonly,
    time5-time1secondonly as tinterval 
    from timebucket;
    

    And we get

    Let's try with a different time column.

    Select pkid, 
    time5, 
    time4timethousandthsecond,
    time5- time4timethousandthsecond as tinterval 
    from timebucket;

    So next I set record pkid 14 back a day and re-run to see what happens.

    Update timebucket set time4timethousandthsecond='2019-12-04' where pkid=14;
    

    and run the former select again;

    Select pkid, 
    time5, 
    time4timethousandthsecond,
    time5- time4timethousandthsecond as tinterval 
    from timebucket;

    and we see the interval is correctly recording.

    Now if we want to rank on tinterval: I was unable to do it directly from a query, so I went ahead and updated the former timeinterval column as follows.

    update timebucket set timeinterval=time5-time4timethousandthsecond;

    and now doing a select on this we get

    select pkid, timeinterval from timebucket;

    What we see is

    But we are not showing the fact that record 14 should be 1 day; this is because we should have defined timeinterval as an interval variable rather than a time(0) variable.

    So we can do this as follows and update appropriately.

    Alter table timebucket add column timeinterval2 interval;
    update timebucket set timeinterval2=time5-time4timethousandthsecond;
    select pkid, timeinterval2 from timebucket;

    And we get the right result

    And now let's rank these to check it is sorting them correctly.

    Select pkid, 
    time4timethousandthsecond, 
    timeinterval2, 
    rank() over wn as rank from timebucket 
    window wn as (order by timeinterval2) 
    order by rank;

    And we get the correct result

    022 Postgres – Setting up starting variables in psqlrc

    So how do we adjust the defaults for the command line prompt in psql Postgres?

    Set up your psqlrc defaults

    Go to the command prompt and navigate to the following directory

    c:\Users\Mark\AppData\Roaming\Postgresql\

    and either find or create a file called

    psqlrc.conf

    This is a simple text file that you can edit or create in notepad.

    --------------------------
    --psqlrc set preferences--
    -- Author Mark Brooks --
    --------------------------
    
    \set QUIET 1
    
    \x auto
    \timing
    \set COMP_KEYWORD_CASE upper
    
    \pset border 2
    \pset pager off
    \pset null <NULL>
    \setenv editor 'C:\\Program Files (x86)\\Notepad++\\notepad++.exe'
    \set VERBOSITY verbose
    
    \set QUIET 0
    
    \echo 'Welcome to PostgreSQL \n'
    \echo 'Type :version to see PostgreSQL version \n'
    \echo 'Type :extensions to see the available extensions'
    
    \set version 'SELECT version();'
    \set extensions 'select * from pg_available_extensions;'

    This allows you, for instance, to set up which editor will appear when you perform the \e command.
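    With the file above in place, the two shortcuts defined at the bottom can be run directly at the psql prompt:

    :version
    :extensions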

    021 Postgres with PostGIS plugin – Create junction table sites in catchments

    This is a working example of how it would be possible to set up code that would automatically allocate, for example, housing sites to school catchment areas. It could also be used to list constraints relevant to particular housing sites. It is more complicated in that it will indicate what percentage of the housing site is within which catchment area, and therefore allows for a single housing site being in multiple catchment areas. I am planning on coming back and expanding on this post. In this respect it represents a refinement of this Post

    So we need two tables

    t001asites which has a geometry field called geom
    and another table which will be the catchments table called
    t002bcatchments which has a geometry field called geom.

    Both tables must have a serial primary key of pkid and both tables must be polygon data and the geom field MUST be defined as polygon and NOT multipolygon.

    Air code is as follows.

      1. Create table containing digitised polygons of housing sites.
      2. Create table containing digitised polygons of catchments.
      3. Measure the area of the housing sites and place that value in an area column within the housing sites table t001asites.
      4. Split the housing sites by the catchment boundaries, ensuring that each split polygon inherits the catchment it was split by.
      5. Re-measure the areas of these split sites and add an area column to store the new calculations.
      6. Divide the figure obtained in 5 by the figure obtained in 3, which will indicate what proportion of the housing site is in which catchment.
      7. Perform a least remainder method on the individual sites, grouped by their original housing sites, to ensure the proportions sum to 1.

    So to the code

    BEGIN;
    SET LOCAL check_function_bodies TO FALSE;
    CREATE OR REPLACE FUNCTION part01catchjunctionmaker() returns void as $$
    Alter table t001asites add column area integer;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part02catchjunctionmaker() returns void as $$
    Update t001asites set area=ST_Area(geom);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part022catchjunctionmaker() RETURNS void AS $$
    DROP TABLE IF EXISTS t200;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part03catchjunctionmaker() RETURNS void AS $$
    CREATE TABLE t200 AS select a.pkid as t001pkid, b.pkid as t002pkid, a.area as t001area, ST_intersection(a.geom, b.geom) as geom FROM t001asites a, t002bcatchments b;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part04catchjunctionmaker() RETURNS void AS $$
    ALTER TABLE t200 add column pkid serial primary key, add column area integer,add column proportion decimal (10,9);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part06catchjunctionmaker() RETURNS void AS $$
    UPDATE t200 SET area=ST_Area(geom);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part07catchjunctionmaker() RETURNS void AS $$
    DELETE from t200 where area=0 or area is null;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part08catchjunctionmaker() RETURNS void AS $$
    UPDATE t200 SET proportion= cast(area as decimal)/cast(t001area as decimal) WHERE area > 0;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part088catchjunctionmaker() RETURNS void AS $$
    DROP table IF EXISTS t201;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part09catchjunctionmaker() RETURNS void AS $$
    Create table t201 as Select pkid,t001pkid,t002pkid, t001area, area, proportion, sum(proportion) OVER (PARTITION BY t001pkid ORDER BY t001pkid, proportion) as cum_proportion FROM t200 ORDER BY t001pkid, proportion;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part10catchjunctionmaker() RETURNS void AS $$
    Alter table t201 add column value decimal (14,9),
    Add column valuerounded integer,
    Add column cumulvaluerounded integer,
    Add column prevbaseline integer,
    Add column roundproportion integer;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part11catchjunctionmaker() RETURNS void AS $$
    UPDATE t201 set value = proportion * 100;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part12catchjunctionmaker() RETURNS void AS $$
    UPDATE t201 set valuerounded = round(value,0);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part13catchjunctionmaker() RETURNS void AS $$
    update t201 set cumulvaluerounded = round((cum_proportion*100),0);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part14catchjunctionmaker() RETURNS void AS $$
    update t201 set cumulvaluerounded=100 where cumulvaluerounded = 101;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part15catchjunctionmaker() RETURNS void AS $$
    update t201 set prevbaseline = round((cum_proportion - proportion)*100);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part16catchjunctionmaker() RETURNS void AS $$
    update t201 set roundproportion = (cumulvaluerounded-prevbaseline);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part17catchjunctionmaker() RETURNS void AS $$
    DELETE from t201 where roundproportion=0 or roundproportion is null;
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part18catchjunctionmaker() RETURNS void AS $$
    alter table t201 add column proppercent decimal(3,2);
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION part19catchjunctionmaker() RETURNS void AS $$
    update t201 set proppercent = cast(roundproportion as decimal)/100;
    $$ LANGUAGE SQL;
    COMMIT;
    

    and now a function to pull it all together:

    CREATE OR REPLACE FUNCTION createcjt()
    RETURNS TEXT AS
    $BODY$
    BEGIN
    PERFORM part01catchjunctionmaker();
    PERFORM part02catchjunctionmaker();
    PERFORM part022catchjunctionmaker();
    PERFORM part03catchjunctionmaker();
    PERFORM part04catchjunctionmaker();
    PERFORM part06catchjunctionmaker();
    PERFORM part07catchjunctionmaker();
    PERFORM part08catchjunctionmaker();
    PERFORM part088catchjunctionmaker();
    PERFORM part09catchjunctionmaker();
    PERFORM part10catchjunctionmaker();
    PERFORM part11catchjunctionmaker();
    PERFORM part12catchjunctionmaker();
    PERFORM part13catchjunctionmaker();
    PERFORM part14catchjunctionmaker();
    PERFORM part15catchjunctionmaker();
    PERFORM part16catchjunctionmaker();
    PERFORM part17catchjunctionmaker();
    PERFORM part18catchjunctionmaker();
    PERFORM part19catchjunctionmaker();
    RETURN 'process end';
    END;
    $BODY$
    LANGUAGE plpgsql;
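    With the two source tables in place, the whole chain can then be run with a single call from psql:

    SELECT createcjt();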
    

    MS SQL Azure – Creating contained users – SQL Authentication – DACPAC and BACPAC import

    In every database engine it is important to create logins that enforce security around your database and that can be maintained.
    Additionally, if you are working for a client, you may wish to transfer the database to them at some point in the future.

    In SQL Azure users can be created against the master database in the instance and the role can then be transferred to individual databases.

    Fine, but there may be circumstances where you want to isolate roles to individual databases, so that when the databases are moved the roles move with them and are not left behind in the master database.
    The following sets out an example of how to set up a contained database in SQL Azure, along with something extra you have to think about when re-importing to a SQL Server instance.

    Using your sysadmin account, connect to the database you wish to add a user to and run:

    CREATE USER rocketengineapplication WITH PASSWORD = 'Bluedanube99';
    ALTER ROLE db_owner ADD MEMBER rocketengineapplication;
    

    Note SQL Azure requires passwords to be 'sufficiently complicated'. At the time of writing the default Azure password complexity rules seemed to be: minimum length of 8 characters, minimum of 1 uppercase character, minimum of 1 lowercase character and minimum of 1 number.

    And to drop the user, go in through SSMS:

    Security / Users / the users should be listed where you can select and choose DELETE

    Now developers could use this username and password to log in to the database and do most of what is required, without having any privileges on the SQL Server instance, and if you ever transfer the database the role will pass with it.

    Here is a link to built in database roles
    SQL Database Roles

    Secure a single or pooled database in SQL Azure

    and here is a useful query that can be run to identify the users and roles that a particular database has. This allows you to check what users are on a database and what their roles are.

    SELECT u.name AS UserName, u.type_desc AS UserType, r.name AS RoleName
    FROM sys.database_principals AS u
    LEFT JOIN sys.database_role_members AS rm ON rm.member_principal_id = u.principal_id
    LEFT JOIN sys.database_principals AS r ON r.principal_id = rm.role_principal_id
    WHERE
        u.type NOT IN('R', 'G')
    ORDER BY
          UserName
        , RoleName;
    

    Note that when deploying or importing data-tier applications to, for instance, SQL Express versions, contained database authentication is deactivated by default and must be activated.

    To do this connect to the local SQL Express instance, highlight Databases on the left hand side, then run the following code:

    sp_configure 'contained database authentication', 1;
    GO
    RECONFIGURE;
    GO
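    To verify the change, you can query the server-level setting and, for individual databases, the containment level (sys.configurations and sys.databases are standard catalog views):

    SELECT name, value_in_use FROM sys.configurations
    WHERE name = 'contained database authentication';
    SELECT name, containment_desc FROM sys.databases;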
    

    DACPAC (structure only) and BACPAC (data and structure) import should now be possible locally!

    This will specifically assist with the following error message which I was getting when I tried to import the database back into a local machine.

    TITLE: Microsoft SQL Server Management Studio
    ------------------------------
    
    Could not deploy package.
    Error SQL72014: .Net SqlClient Data Provider: Msg 12824, Level 16, State 1, Line 5 The sp_configure value 'contained database authentication' must be set to 1 in order to alter a contained database. You may need to use RECONFIGURE to set the value_in_use.
    Error SQL72045: Script execution error. The executed script:
    IF EXISTS (SELECT 1
    FROM [master].[dbo].[sysdatabases]
    WHERE [name] = N'$(DatabaseName)')
    BEGIN
    ALTER DATABASE [$(DatabaseName)]
    SET FILESTREAM(NON_TRANSACTED_ACCESS = OFF),
    CONTAINMENT = PARTIAL
    WITH ROLLBACK IMMEDIATE;
    END
    
    Error SQL72014: .Net SqlClient Data Provider: Msg 5069, Level 16, State 1, Line 5 ALTER DATABASE statement failed.
    Error SQL72045: Script execution error. The executed script:
    IF EXISTS (SELECT 1
    FROM [master].[dbo].[sysdatabases]
    WHERE [name] = N'$(DatabaseName)')
    BEGIN
    ALTER DATABASE [$(DatabaseName)]
    SET FILESTREAM(NON_TRANSACTED_ACCESS = OFF),
    CONTAINMENT = PARTIAL
    WITH ROLLBACK IMMEDIATE;
    END
    
    (Microsoft.SqlServer.Dac)
    

    I note that for local SQL Express, and I believe enterprise SQL Server, there is the additional option of creating a backup (.bak) file. I also note from my reading that .bak files are considered preferable to bacpac files because they have enforced ACID compliance, and for large databases that are constantly being used they are recommended over bacpac files. SQL Azure doesn't allow .bak file backups through SSMS from what I can see, so if this is an issue for you, consider temporarily disconnecting front ends from the database while a bacpac is taken. If you need a .bak file for some reason you can bring the database into a local SQL Server instance and take a .bak file from there.

    Something to be aware of..

    MS SQL Azure – Computed Columns

    It can be really nice to create a computed column and add it to the table, rather than holding the value in an ordinary field.

    This would work well using the function listed in the previous post where I automatically calculate the age of trees.

    Add Computed Column to SQL Azure Table

    ALTER TABLE dbo.t001trees ADD treeage AS (dbo.functionyearmonthday(dbo.t001trees.plantdate, GETDATE()));
    

    This will appear in the table and look like an actual field, but it is calculated on the fly and the figures are not stored in the table unless you specify persistence. (Note that a column based on GETDATE() is non-deterministic, so it cannot in fact be persisted.)

    see the above link for further reading on this topic

    MS SQL Azure – TSQL to name the age between dates in text

    It is relatively easy to calculate the number of years, months, days, hours or seconds between two dates using the native DATEDIFF built-in function which comes with SQL.

    e.g.

    SELECT dbo.t001trees.pkid, 
    dbo.t001trees.plantdate, 
    DATEDIFF(Year, dbo.t001trees.plantdate, GETDATE()) as treeage 
    from dbo.t001trees;
    

    But here is a function that will spell it out into a string that reads something like
    2 days
    1 month 2 days
    2 years 1 month 2 days

    CREATE OR ALTER FUNCTION dbo.functionyearmonthday
    (
    @datefrom Date,
    @dateto Date
    )
    RETURNS varchar(100)
    as
    BEGIN
    DECLARE @date1 DATETIME, @date2 DATETIME, @result VARCHAR(100);
    DECLARE @years INT, @months INT, @days INT;
    
    SET @date1 = @datefrom
    SET @date2 = @dateto
    
    SELECT @years = DATEDIFF(yy, @date1, @date2)
    IF DATEADD(yy, -@years, @date2) < @date1
    SELECT @years = @years-1
    SET @date2 = DATEADD(yy, -@years, @date2)
    
    SELECT @months = DATEDIFF(mm, @date1, @date2)
    IF DATEADD(mm, -@months, @date2) < @date1
    SELECT @months=@months-1
    SET @date2= DATEADD(mm, -@months, @date2)
    
    SELECT @days=DATEDIFF(dd, @date1, @date2)
    IF DATEADD(dd, -@days, @date2) < @date1
    SELECT @days=@days-1
    SET @date2= DATEADD(dd, -@days, @date2)
    
    SELECT @result= ISNULL(CAST(NULLIF(@years,0) AS VARCHAR(10)) + ' years ','')
    + ISNULL(' ' + CAST(NULLIF(@months,0) AS VARCHAR(10)) + ' months ','')
    + ISNULL(' ' + CAST(NULLIF(@days,0) AS VARCHAR(10)) + ' days','')
    
    RETURN @result;
    END
    

    And if you would like to call the function from another query here is an example

    SELECT dbo.functionyearmonthday(dbo.t001trees.plantdate, GETDATE()) as treeage FROM dbo.t001trees

    That is enough for most people, but it can be expanded to include hours, minutes, seconds and milliseconds, which could be useful if you need more precision. It can be seen that the native DATEDIFF function is used extensively within this function.

    CREATE OR ALTER FUNCTION dbo.functiontimeperiodmoreprecision
    (
    @datefrom DATETIME, -- DATETIME rather than Date, otherwise the time portion is lost
    @dateto DATETIME
    )
    RETURNS varchar(100)
    as
    BEGIN
    DECLARE @date1 DATETIME, @date2 DATETIME, @result VARCHAR(100);
    DECLARE @years INT, @months INT, @days INT,
    @hours INT, @minutes INT, @seconds INT, @milliseconds INT;
    
    SET @date1 = @datefrom
    SET @date2 = @dateto
    
    SELECT @years = DATEDIFF(yy, @date1, @date2)
    IF DATEADD(yy, -@years, @date2) < @date1
    SELECT @years = @years-1
    SET @date2 = DATEADD(yy, -@years, @date2)
    
    SELECT @months = DATEDIFF(mm, @date1, @date2)
    IF DATEADD(mm, -@months, @date2) < @date1
    SELECT @months=@months-1
    SET @date2= DATEADD(mm, -@months, @date2)
    
    SELECT @days=DATEDIFF(dd, @date1, @date2)
    IF DATEADD(dd, -@days, @date2) < @date1
    SELECT @days=@days-1
    SET @date2= DATEADD(dd, -@days, @date2)
    
    SELECT @hours=DATEDIFF(hh, @date1, @date2)
    IF DATEADD(hh, -@hours, @date2) < @date1
    SELECT @hours=@hours-1
    SET @date2= DATEADD(hh, -@hours, @date2)
    
    SELECT @minutes=DATEDIFF(mi, @date1, @date2)
    IF DATEADD(mi, -@minutes, @date2) < @date1
    SELECT @minutes=@minutes-1
    SET @date2= DATEADD(mi, -@minutes, @date2)
    
    SELECT @seconds=DATEDIFF(s, @date1, @date2)
    IF DATEADD(s, -@seconds, @date2) < @date1
    SELECT @seconds=@seconds-1
    SET @date2= DATEADD(s, -@seconds, @date2)
    
    SELECT @milliseconds=DATEDIFF(ms, @date1, @date2)
    
    SELECT @result= ISNULL(CAST(NULLIF(@years,0) AS VARCHAR(10)) + ' years,','')
    + ISNULL(' ' + CAST(NULLIF(@months,0) AS VARCHAR(10)) + ' months,','')
    + ISNULL(' ' + CAST(NULLIF(@days,0) AS VARCHAR(10)) + ' days,','')
    + ISNULL(' ' + CAST(NULLIF(@hours,0) AS VARCHAR(10)) + ' hours,','')
    + ISNULL(' ' + CAST(@minutes AS VARCHAR(10)) + ' minutes and','')
    + ISNULL(' ' + CAST(@seconds AS VARCHAR(10))
    + CASE
    WHEN @milliseconds > 0
    THEN '.' + CAST(@milliseconds AS VARCHAR(10))
    ELSE ''
    END
    + ' seconds','')
    
    RETURN @result
    END
    

    MS Access Function : Function to create SQL union queries

    Another small function that can speed up the text required to be written for large union queries.

    Typically this can be used with
    MS Access Function : Scan through a directory and write list of files to a table.

    There are a number of data providers that supply data files broken down into different geographical areas. In previous posts I have outlined how we can get these all into PostGIS. But once they are in PostGIS (or any other database) you may wish to get these separate tables into one single global table. Clearly a union query will do this; however, it can be time consuming to write the union query out as it simply has so many tables in it.

    I used the code in the link to scan a directory and get all the filenames (in this case shape files of the UK road network) into a table that I called UKRoadLinks, which had two fields: PKID (primary long integer autonumber) and Filen (text), where Filen holds the filenames.

    I then wrote the following function to write a text file that on completion will contain an SQL union of all the tables listed in your recordset. I then copied and pasted this into the PostGIS database, within which I had already imported all the sub-tables, to union them into a single copy. Alter the recordset source if, for instance, you wish to use only a subset. The nice thing about this is that if you have hundreds of tables to amalgamate there is less likelihood of you accidentally missing or misspelling table names.

    Public Function createunionsqllinks()
    
        Dim rst As DAO.Recordset
        Set rst = CurrentDb.OpenRecordset("UKRoadLinks")
    
        Dim fs, TextFile
        Set fs = CreateObject("Scripting.FileSystemObject")
        Set TextFile = fs.CreateTextFile("c:\data\sqlmerge.txt", True)
        TextFile.WriteLine ("CREATE TABLE sqltomergetables AS ")
        Do Until rst.EOF = True
            TextFile.WriteLine (Chr$(40) & "select * from " & rst!Filen & Chr$(41))
            rst.MoveNext
            'only write UNION between tables, not after the final one
            If Not rst.EOF Then TextFile.WriteLine (" UNION ")
        Loop
        rst.Close
        TextFile.WriteLine (";")
        TextFile.Close
    
        MsgBox "Created"
    
    End Function

    019 Postgres and Postgis: Load multiple shape files into a Postgis database

    Environment
    Windows 10
    Postgresql 11

    To start you need a set of shape files that you know the location of.

    For demonstration purposes I use the OS Open Roads open data shape files.

    Link to OS Open Data

    Next within explorer I create a directory in the c:\ root called data.
    And then I create two subdirectories.
    1 – shp
    2 – sql

    Place all the shape files you wish to be loaded into a postgis database into the newly created c:\data\shp directory.

    Next we will make sql text files that can be used to transfer the information from shape format into Postgis. At this point it is not necessary to even know what database you are going to place the shape files into.

    Next open up the command prompt and navigate to the bin directory of your particular postgres installation. Standard as follows :-

    Next I copy in the below text.

    For %f in (C:\data\shp\*shp) do shp2pgsql -s 27700 %f public.%~nf > C:\data\sql\%~nf.sql

    This will take all the shape files in the C:\data\shp\ directory and create SQL text files, placing them in C:\data\sql\. After completion you may wish to go to the directory and check that creation has occurred. You can even look at the text within a single file, preferably with some kind of code editor, to check that it looks approximately correct.

    Next within psql I create the target database, connect to it and apply the postgis extension.
    create database osopenroaddb;
    \c osopenroaddb
    create extension postgis;

    Next, unless you have a lot of time on your hands, you will want to go to
    C:\Users\Mark\AppData\Roaming\postgresql
    and open up (or create) the pgpass.conf file and place the following line in it.

    localhost:5432:*:postgres:yoursecretpassword

    Without that line you will be asked for a password every time a new SQL file is run.

    Next, back in the command prompt bin directory, do the following. Note there is no password prompt – it takes it from the pgpass.conf file.

    For %f in (c:\data\sql\*sql) do psql -h localhost -d osopenroaddb -U postgres -f %f > nul

    If you have a lot of big files this could take a while to run. But once finished all your shape files should now be in your postgis instance in the specified database.

    018 Postgres : Export Data and Structure of a Single database from a Postgres instance / Import Data and Structure of a Single database into a Postgres Instance

    Demonstration environment and programs
    Windows 10
    Postgres Version : 11.2
    QGIS desktop version : 3.4.4

    My working through of a process to export a single database (structure and data) from a Postgres instance, where the database has PostGIS and pgRouting extensions enabled, followed by importing it into, in this example, the same instance, although in principle it could be a different instance.
    Access the command prompt (RUN AS ADMINISTRATOR)

    PLEASE NOTE run the command prompt as administrator or you will frequently get an ACCESS DENIED message after using the pg_dump command.

    Navigate to the directory of the Postgres version from which you wish to export the database. This will typically be the bin subdirectory of your postgres version (here 11). You can ensure that pg_dump.exe is there by doing a dir on the directory; alternatively you could reference the full path to pg_dump and pass the parameters to it subsequently.


    Next pass in as parameters the database you wish to export, along with the name you want to give the exported file, and then hit return.

    pg_dump -U postgres -p 5432 edinburghrouting > c:\dbexport.pgsql

    On hitting return, depending on the security of your instance, you will be prompted for a password.

    Enter the password hit return

    When I do this on my home computer there is no return message but going into the C drive I can see that dbexport.pgsql now exists.

    Next we want to create a blank database; this is required to import the data and structure into.
    This we do in psql, signed in as a user with sufficient privilege.
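    A minimal sketch in psql – the database name here matches the import command below:

    CREATE DATABASE importededinburghrouting;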

    Now back in the command line running as administrator we can run the following.

    psql -U postgres importededinburghrouting < c:\dbexport.pgsql

    Pressing return depending on your security you should be asked for your password.

    Once this is done it goes through a process of recreating the structure of the database then importing all the data

    For me the first lines look like this

    and the last look like this

    Now looking at the instance as a whole we can see the imported database

    and here I am displaying geographical information through QGIS to get an idea of the data and ensure that it appears to be all correct.

      SUMMARY


    There are quite a lot of tutorials online on how to do this but most seem to skip over some details. I've tried to be as accurate as possible, but depending on your setup there may be differences. Nonetheless this is an extremely important task to perform, so it is worth practicing to get right.

    MS Access Function : Print to excel spreadsheet field definitions of all tables in a database

    This places all tables and fields into an excel file on a single worksheet as a single table.

    Public Function TableDefsToExcel()
        'renamed from TableDef to avoid clashing with the DAO TableDef type
        Dim def As DAO.TableDef
        Dim wb As Object
        Dim xL As Object
        Dim lngRow As Long
        Dim f As DAO.Field
        Set xL = CreateObject("Excel.Application")
        xL.Visible = True
        Set wb = xL.workbooks.Add
        'write a header row, then start the data at row 2
        With wb.sheets("Sheet1")
            .Range("A1").Value = "Table"
            .Range("B1").Value = "Field"
            .Range("C1").Value = "Type"
            .Range("D1").Value = "Size"
            .Range("E1").Value = "Required"
        End With
        lngRow = 2
        For Each def In CurrentDb.TableDefs
            For Each f In def.Fields
                With wb.sheets("Sheet1")
                    .Range("A" & lngRow).Value = def.Name
                    .Range("B" & lngRow).Value = f.Name
                    .Range("C" & lngRow).Value = f.Type
                    .Range("D" & lngRow).Value = f.Size
                    .Range("E" & lngRow).Value = f.Required
                    lngRow = lngRow + 1
                End With
            Next
        Next
    End Function
    

    MS Access Function : Loop through tables and export to csv

    A function that will loop through an access database and export all tables to csv and xls.

    Useful for subsequent import through QGIS into Postgres.

    Public Function ExportAll()
        Dim obj As AccessObject, dbs As Object
        Dim strFolder As String
        strFolder = "c:\"
        Set dbs = Application.CurrentData
        For Each obj In dbs.AllTables
            'skip the MSys system tables
            If Left(obj.Name, 4) <> "MSys" Then
                DoCmd.TransferText acExportDelim, , obj.Name, strFolder & obj.Name & ".csv", True
                DoCmd.TransferSpreadsheet acExport, acSpreadsheetTypeExcel9, obj.Name, strFolder & obj.Name & ".xls", True
            End If
        Next obj
    End Function

    QGIS and PostGIS : Identifying direction of a vector

    If using the dijkstra function with direction turned on, it is important to identify the order in which the nodes of a vector line have been digitised. This is called the direction; dijkstra can use this with a reverse_cost attribute to handicap wrong movement along lines to such an extent that the correct path can be calculated around things like roundabouts.

    Here is an example of the roundabout at Straiton in Edinburgh, just north of the A720 bypass. While some of the lines have a correct anti-clockwise orientation, clearly some have been incorrectly digitised.

    First we can see this by displaying the network in QGIS but using the styling to arrow the direction.

    The function that can be used to reverse such inaccuracies, if you can't resort to buying a correct dataset, is ST_Reverse.
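    As a sketch, using the nuroadlink table from the post below (the pkid filter is hypothetical – substitute whichever lines you have identified as wrongly digitised):

    UPDATE public.nuroadlink
    SET geom = ST_Reverse(geom)
    WHERE pkid = 42;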

    017 Postgres command line : psql : Notices

    RAISE NOTICE can provide the same function as a message box in VBA, i.e. you can use it to comment on the progress of a script. RAISE NOTICE is not supported in plain SQL, so you can't place it in scripts written in the SQL language; it needs to be in plpgsql scripts. This isn't too much of a hassle, as the way I am working at the moment I am calling the SQL from plpgsql anyway, so I can place my message boxes in there.

    No VBA Ok buttons.

    CREATE OR REPLACE FUNCTION noticeexample() returns void as $$
    BEGIN
    RAISE NOTICE 'ONE FINE DAY IN THE MIDDLE OF THE NIGHT';
    END;
    $$
    LANGUAGE PLPGSQL;
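    Calling the function from psql then prints the notice:

    SELECT noticeexample();
    -- NOTICE:  ONE FINE DAY IN THE MIDDLE OF THE NIGHT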

    016 Postgres command line : psql : Strip out the Z coordinate from a geometry field

    When creating a topology the geometry field cannot contain a Z coordinate.

    OK, but the Ordnance Survey open data highways layers contain a Z coordinate. Previously I had stripped this out using the latest version of QGIS, which has a tick box in the front end that allows for stripping of the Z coordinate during import. If you don't have access to the latest QGIS version, how can you strip out the Z coordinates?

    ST_FORCE2D

    ALTER TABLE public.nuroadlink ADD COLUMN geom2 geometry(multilinestring,27700);
    UPDATE public.nuroadlink SET geom2 = ST_FORCE2D(public.nuroadlink.geom);
    ALTER TABLE public.nuroadlink drop column geom;
    ALTER TABLE public.nuroadlink RENAME COLUMN geom2 TO geom;

    015 Postgres command line : psql : Create functions and then script those functions

    I had assumed that after I had created a working SQL script I would just be able to wrap the whole thing easily into a function and then bang, it would be off to the races.
    My script really needed to be run in order, and for some as yet undefined reason I was getting particular errors where a table would be created and then a following query would add to or alter that table. It looked like the second query was trying to adapt the table prior to its creation, with an inevitable error.

    I managed to get it working by making each SQL Query a function and then scripting the functions consecutively in a separate function using the PERFORM instruction.

    I incorporate into this the check_function_bodies switch, which allows the creation of sql referring to objects that may not be in existence yet.

    BEGIN;
    SET LOCAL check_function_bodies TO FALSE;
    CREATE OR REPLACE FUNCTION query01() returns void as $$
    CREATE TABLE t001start 
    (
    pkid serial primary key,
    geompkidt001 geometry(point,27700)
    );
    $$ LANGUAGE SQL;
    CREATE OR REPLACE FUNCTION query02() returns void as $$
    CREATE TABLE t002end 
    (
    pkid serial primary key,
    geompkidt002 geometry(point,27700)
    );
    $$ LANGUAGE SQL;
    COMMIT;

    And then subsequently I create a function that runs the functions.

    CREATE OR REPLACE FUNCTION runallthequeries() 
    returns text as
    $BODY$
    BEGIN
    PERFORM query01();
    PERFORM query02();
    RETURN 'process end';
    END;
    $BODY$
    LANGUAGE plpgsql;
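    Running the wrapper from psql executes the table creations in order and returns the marker text, a quick check that the whole chain completed:

    SELECT runallthequeries();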