Add the PostGIS extension to an Azure PostgreSQL database – Worked Example

First we need to allow-list the extension on the server.
Within the Azure portal navigate to server parameters

Search for azure.extensions

Select which extensions to add
REMEMBER to save before exiting!

Wait a short period while the extension is loaded onto the server.

Some points
Azure Database for PostgreSQL supports a subset of key PostgreSQL extensions as listed HERE. This information is also available by running SHOW azure.extensions;. Extensions not listed in this document aren’t supported on Azure Database for PostgreSQL – Flexible Server. You can’t create or load your own extension in Azure Database for PostgreSQL.

After extensions are allow-listed and loaded, they must be installed in your database before you can use them. To install a particular extension, run the CREATE EXTENSION command. This command loads the packaged objects into your database.
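
For example, once postgis has been allow-listed in the portal, a minimal sketch of the install run against the target database (using the extension from this worked example) would be:

CREATE EXTENSION postgis;

-- confirm the extension is installed and check its version
SELECT extname, extversion FROM pg_extension WHERE extname = 'postgis';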

The extension(s) should now be listed against your database.

Postgres – Export a Spatial Database and then Import it Again (pg_dump + psql utilities)

Let's take this from the beginning. The first thing you need to do is be on the physical machine where the database is running.

Next check that the database server is up and running

To do that

Go to the Windows taskbar, click on the search icon and type in

Services

Scroll down to P and look for the name postgresql, then check the Status column. In the example above there are two instances, postgres 11 and postgres 14, and of the two only 14 shows as Running.

There are several Utility executables that come with every postgres installation.

The standard installation setup puts these into a directory equivalent to the following:

We need to navigate at the command prompt to the location of the executable we are interested in. Technically you could qualify the path to the executable when running a command, but I tend to navigate to the location.

So first, from the search box, open the command prompt

cmd

cd C:\Program Files\PostgreSQL\14\bin

pg_dump -U postgres -W -F p -d dbspatialhub > c:\sql\dbspatialhub.sql

And here is a string that works if your database is on the local server with a non-standard port – substitute 5433 for your port:

pg_dump -U postgres -h localhost -p 5433 -d dbsource > c:\sql\dbsourceexport.sql

Run this command – you will note that I qualify the output directory here. I prefer to do this as I don't particularly like writing files to and deleting files from the program directory. I prefer that directory to be reserved for standard files, reducing the likelihood of me in any way accidentally deleting an important file.

You should be presented with a Password: prompt – input the password for the specified user (-U) to complete.

Success will be shown as a return to the former prompt with no message.

and running a DIR on the directory, look for the sql file…

Next, from either the psql prompt or from within pgAdmin 4, create a database you are going to import the SQL into and ensure that the postgis extension is installed. Remember this is the psql prompt and not the DOS command line prompt.

CREATE DATABASE dbspatialhubnew;
CREATE EXTENSION postgis;

Now you have to go back to the command prompt (and the correct directory).

If you are on a different port add the following switch: -p 5434

psql -U postgres -d dbspatialhubnew -f c:\sql\dbspatialhub.sql

After being asked for the password the import will start and continue showing a list of various commands and data being executed and imported.
Similar to below.

Now you can go back to a client such as QGIS and link to the database to start confirming a successful import.
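
Before opening QGIS you can also do a quick check from psql – geometry_columns is a view maintained by PostGIS, so a query along these lines (a sketch, no table names assumed) should list the spatial tables that came across:

SELECT f_table_name, f_geometry_column, srid, type
FROM geometry_columns;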

Congratulations you have copied a database.

It should be noted that as it stands I am having problems importing the layers table, which holds the formatting of the boundaries for the tables.
I will continue to work on this.

MS Azure & Web Applications – TSQL: Create a check constraint on an nvarchar column specifying a minimum length in the database, and enforce OWASP security through the front end

Double security!
Enforce length in the database and then through the front end by validating – in this case – password values (PHPRunner example).

First the backend
You have an nvarchar(128) column in an MS Azure SQL Database (or any Microsoft SQL Server database) and you wish to add a check constraint that does not allow a value in the column to be less than, let's say, 15 characters long.

tablename = users
field = userpassword

ALTER TABLE dbo.users
ADD CONSTRAINT [MinLengthConstraint]
CHECK (LEN([userpassword]) >= 15);

Note that LEN counts characters whereas DATALENGTH counts bytes (two per character for nvarchar), so LEN is used here to enforce a minimum of 15 characters.
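
A quick way to prove the constraint is doing its job is to attempt inserts either side of the limit; this sketch assumes the users table will accept an insert supplying only the userpassword column:

-- should fail with a check constraint violation (9 characters)
INSERT INTO dbo.users (userpassword) VALUES (N'Short1abc');

-- should succeed (16 characters)
INSERT INTO dbo.users (userpassword) VALUES (N'LongEnough123456');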

And then a good way to check the constraints on an individual table using TSQL through SSMS for MS AZURE.

SELECT TABLE_NAME, 
CONSTRAINT_TYPE, 
CONSTRAINT_NAME 
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS 
WHERE TABLE_NAME ='users';

Front End
Now for double security in your web application use a regular expression to prevent users entering a weak password.

^(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{15,128}$
^               # start of the string
(?=.*\d)        # a digit must occur at least once
(?=.*[a-z])     # a lower case letter must occur at least once
(?=.*[A-Z])     # an upper case letter must occur at least once
.{15,128}       # the password must be a minimum of 15 and a maximum of 128 characters
$               # end of the string

OWASP – Open Web Application Security Project

SQL Azure – Creating a numbers table using a Cartesian Product and then expanding dates between two field values for individual records

The following code uses a Common Table Expression to create a numbers table, which is then used to expand all dates between two date values of an individual record. This structure could be used in a variety of circumstances where you need to charge costs to a product over a period of time, or where you need to record consecutive dated values between two dates.

I am using whisky casks as an example here. Imagine a cask came in on a day and needs to be matured for a period, and there will be a charge to an account for every day that the cask is being stored. Clearly, going forward you might want to set up an overnight process to create the daily charges. But what happens when you wish to retrospectively create charges? Charge records build up quickly and it will be difficult to manually keep on top of the required record correction.

Firstly I create a table that will hold a single record for each date a cask is in bond between the two dates. There are additional columns here as I need them for my use case (casks in bond).

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[t066tempexpand](
	[pkid] [int] IDENTITY(1,1) NOT NULL,
	[chargedate] [datetime] NULL,
	[accountcode] [nvarchar](16) NULL,
	[pkidt002] [int] NULL,
	[pkidt053dailystoragecharge] [int] NULL,
	[locationwarehouse] [nvarchar](50) NULL,
 CONSTRAINT [PK_0000temp] PRIMARY KEY CLUSTERED 
(
	[pkid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

Next we create a CTE numbers table and use this to explode a single record for each dated day between the two dates, in this case bondindate and disgorgedate. I have an extra filter here that restricts which casks are being put into the table (here pkid > 2000). Other filters could be where disgorgedate is not null. The following line will be used to identify the charge date:
dateadd(day,v.number,d.bondindate)
and
join Numbers v on v.number between 0 and datediff(day, bondindate, disgorgedate)
The first line shows the date that will be charged. The join is particularly elegant as it restricts the numbers table, stored as a common table expression, to multiple records for an individual cask based on all dates between the two field values. The starting date identifies where to explode from, and the second (in this case the disgorge date) identifies where it must end. Another example: if you wanted to bring it up to today's date, or to the day before today if you have an overnight process running at midnight, you could use getdate()-1 as the end value.

The cross join in the common table expression definition is used to create a cartesian join on itself. From my investigations there is no particular significance to the sys.columns table used, other than that it is a system table holding all columns in all tables in the database and is therefore likely to be somewhat large. It so happens that in the database I am using there are a lot of tables and a lot of columns, so a single self cross join creates more numbers than I will ever need. If you wish to create further records you can further cross join the sys.columns table on itself by repeating the cross join line with subsequent aliases, e.g. cross join sys.columns c. After experimentation I discovered, for instance, that if I left out the cross join there were only some 4,000 records in my numbers table. This I suspect reflects the fact that I have 4,000 columns in the tables of my database. In my case 4,000 x 4,000 – the cartesian product of the sys.columns table on itself – results in 16 million records, more than enough numbers for me. It should be noted that I am restricting the CTE numbers table to the top 20,000 anyway, which in our case relates to over 54 years – more back-dating than I think I will ever need. It should also be noted that the original code I discovered had three cross joins, presumably because the number of tables, and therefore columns, in that database was much smaller.

;WITH Numbers(number) AS (
  select top(20000) row_number() over (order by (select 0))-1
  from sys.columns a
  cross join sys.columns b
)

INSERT into t066tempexpand
(chargedate,
accountcode,
pkidt002,
pkidt053dailystoragecharge,
locationwarehouse)
select 
dateadd(day,v.number,d.bondindate) 
chargedate,
accountcode, 
pkid,
pkidt053dailystoragecharge,
locationwarehouse
  from t002caskfill d
  join Numbers v on v.number between 0 and datediff(day, bondindate, disgorgedate)
  where pkid > 2000
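
As mentioned above, if you wanted to expand each record only up to yesterday rather than to the disgorge date (for instance for casks still in bond, with an overnight process running at midnight), the join line is the only part that needs to change – a sketch using the same column names:

join Numbers v on v.number between 0 and datediff(day, bondindate, getdate()-1)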

Here I am joining to an extra table to find the daily storage charge – this is just a test.

select
a.chargedate,
a.accountcode,
a.pkidt002,
b.dailycharge,
a.pkidt053dailystoragecharge,
a.locationwarehouse,
b.chargecode
from dbo.t066tempexpand a
join dbo.v128dailycaskstoragelk023 b on a.pkidt053dailystoragecharge=b.pkid 
order by pkidt002,chargedate;

And lastly I insert all this information into a Charge history table.

INSERT INTO dbo.t0032chargehistory
(chargedate,
accountcode,
pkidt002cask,
chargedamount,
pkidt053chargecode,
locationwarehouse,
chargecode)
select
a.chargedate,
a.accountcode,
a.pkidt002,
b.dailycharge,
a.pkidt053dailystoragecharge,
a.locationwarehouse,
b.chargecode
from dbo.t066tempexpand a
join dbo.v128dailycaskstoragelk023 b on a.pkidt053dailystoragecharge=b.pkid 
order by pkidt002,chargedate;

Multi-tenancy – Design Options

Some personal notes on different options for managing multi-tenancy

Silo – Different databases for each tenant. Again PHPRunner has an excellent solution for this.

Siloed Database with one Front End PHPRunner

Bridging – Unique schema for each tenant – for me I can't see any great advantages to this and it requires quite a lot of extra work.

Row Level Security – Tenancy managed by the database – One database, with an additional column in the tables recording the tenant; prior to returning the results of a query the database filters them by tenant. A policy is set up with a unique user for each tenant. Applicable to Postgres. Still requires an additional field in all the tables, but it is nice to have and can be extremely quick to implement.

Client Software – Client side implemented tenancy – PHP Runner has an option to implement Row Level Security in the design of the front end.

Amazon Web Services article on Multi-tenancy

Some points – for Postgres, implementing Row Level Security on tables does not carry through to views, which is a major issue for me. There are ways round it though. It would be great to have the database managing the row level security, and if you have all your tenants in one database possibly the best way to do it is using the below.

How to Add Row Level Security to Views in PostgreSQL
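
For reference, a minimal sketch of what database-managed row level security looks like in Postgres – the table, column and setting names here are purely illustrative:

-- add a tenant column and switch on row level security for the table
ALTER TABLE t001orders ADD COLUMN tenant_id text;
ALTER TABLE t001orders ENABLE ROW LEVEL SECURITY;

-- rows are only visible where the tenant matches the current session setting
CREATE POLICY tenant_isolation ON t001orders
USING (tenant_id = current_setting('app.current_tenant'));

-- each tenant's connection identifies itself before querying
SET app.current_tenant = 'tenant_a';
SELECT * FROM t001orders;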

Identify whether you are using MariaDB or MySQL using phpMyAdmin (through cPanel here)

When it comes to relational databases I used to think: right, there's

    MySQL
    SQL Server
    PostGres
    Oracle

I know there is MariaDB as well, and I know that it is a direct substitute for MySQL. I think I might be using it, but you know what – I'm not really sure. Whenever I look things up I quite often search for MySQL, and to date any information obtained using those terms has worked when executed against my databases through phpMyAdmin.

Should I be using MariaDB or hang on – am I???

Well, it turns out they are so similar that you might be using it and not even know it. It turns out my hosting company uses MariaDB instead of MySQL, but you might be under the impression you are still using MySQL. Why? Because if you are like me and you are using cPanel, all the branding is still MySQL. I suspect this is actually very usual for anyone using cPanel.

Here’s my Database panel in cPanel

MySQL is written all over the place, even with a registered trademark symbol, reinforced with the MySQL dolphin logo.

Additionally phpMyAdmin never really mentions much about the database server you are using. And if you look it up, phpMyAdmin is often described as a MySQL client.

Am I using MySQL —— NOPE

Open up phpMyAdmin

Open the SQL panel and execute the following query.

SHOW VARIABLES LIKE "%Version%";

So despite all the logos and branding to the contrary I’m using MariaDB for this blog! Useful to know. I don’t know if it was MariaDB when I signed up in 2014 or not but this blog has been running continuously for 7 years and 7 months (at date of publishing this article) with no issues.

Kinsta on the differences between MariaDB and MySQL

I’m happy with being on MariaDB just because I am not mad about Oracle. Turns out I’m not alone in that position. Google / AliBaba and the European Investment Bank are all heavily invested in MariaDB which is probably why so many hosting companies actually run MariaDB in the background. There are big companies on MySQL but MariaDB is not going anywhere and many like me might have been running everything on MariaDB without even knowing it for years!

MariaDB on Wikipedia

I note in the Wiki article they state geographical function support as a difference between MySQL and MariaDB – I'm not sure that part is true and I have asked Wikipedia to investigate; according to my research there is support via extensions, although I have no personal experience of their use.

MYSQL / MariaDB – Useful SQL in both creating New Tables and Clarifying Default Parameters

Within an instance to identify the default character sets and collation in all databases.

SELECT * FROM INFORMATION_SCHEMA.SCHEMATA;

To alter the default character set and collation for a database in a MySQL or MariaDB instance.

ALTER DATABASE dbname CHARACTER SET utf8 COLLATE utf8_general_ci;

Creating Table – Template SQL

CREATE TABLE table1 
(col1 INT(10) NOT NULL AUTO_INCREMENT, 
col2 CHAR(30), 
PRIMARY KEY (col1))
ENGINE = INNODB;

And a good source of data types is here:
Tech on the Net

There are unconfirmed reports that queries against TEXT fields are generally three times slower than against VARCHAR fields.

Values in VARCHAR columns are variable-length strings. The length can be specified as a value from 0 to 255 before MySQL 5.0.3, and 0 to 65,535 in 5.0.3 and later versions. The effective maximum length of a VARCHAR in MySQL 5.0.3 and later is subject to the maximum row size (65,535 bytes, which is shared among all columns) and the character set used.

VARCHAR(MAX) is a thing – though in SQL Server rather than MySQL/MariaDB, which do not support it.

And if you want to change the engine on a table retrospectively, see OPERATIONS.

And a short note on your choice of collation

From Stack overflow here

Actually, you probably want to use utf8_unicode_ci or utf8_general_ci.

utf8_general_ci sorts by stripping away all accents and sorting as if it were ASCII
utf8_unicode_ci uses the Unicode sort order, so it sorts correctly in more languages
However, if you are only using this to store English text, these shouldn’t differ.

Note that further down there is a person who states “be very aware of this problem that can occur using utf8_general_ci”:

“MySQL will not distinguish between some characters in select statements, if the utf8_general_ci collation is used. This can lead to very nasty bugs – especially for example, where usernames are involved. Depending on the implementation that uses the database tables, this problem could allow malicious users to create a username matching an administrator account.

This problem exposes itself at the very least in early 5.x versions – I’m not sure if this behaviour has changed later.”
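
You can see the sort of behaviour being described for yourself in the phpMyAdmin SQL panel; under utf8_general_ci an accented character compares equal to its unaccented equivalent, so the following sketch returns 1:

SELECT CONVERT('cafe' USING utf8) COLLATE utf8_general_ci = CONVERT('café' USING utf8) AS treated_as_equal;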

And here is some code that allowed me to reset the auto increment index on a table.

ALTER TABLE my_table MODIFY COLUMN pkid INT(10) UNSIGNED;
COMMIT;
ALTER TABLE my_table MODIFY COLUMN pkid INT(10) UNSIGNED AUTO_INCREMENT;
COMMIT;

I have tested it and it works with InnoDB and MySQL.

Another way to do it might be to drop the pkid column and then recreate it again

Using

ALTER TABLE mytable DROP pkid;
ALTER TABLE mytable ADD pkid INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;

Proof of Negative for database design : Put simply – Try to keep data in one easily searchable place!

It's always satisfying when you are able to conceptualise a problem that then seems to appear widely. Invariably when I do this it is something that others reasoned on long before; nonetheless it is still satisfying to arrive at the table, albeit late.

I have done a lot of work in the public sector and many of the legal requirements constantly change and are not always logical. This can lead to contradictory requirements which software designers and particularly vendors can’t or won’t keep up with. Inevitably we end up with satellite procedures to catch edge cases. This is causing quite a lot of friction because of something I have started thinking about as Proof of Negative.

To prove a negative you have to access all information that has ever been – often to prove a positive you just need to find the one instance and then you can stop looking. Therefore it is on average harder to prove a negative than it is to prove a positive.

It was highlighted to me the other day when looking at a map. We were looking for a planning application and we were certain it existed. We had no reason to presume it didn't exist, however we were unable to find it. I eventually went to the map and looked at every planning application in the area; none existed, and so the conclusion was that it had never existed – time consuming, but it highlighted to me the importance of having all the information in one format that was easily comparable. Quite often when switching between systems there are reconciliation issues – either gaps or overlaps – or the search options are wildly different, which leaves you needing to reconcile between searches: additional difficulty and additional time.

So, something to keep in mind when moving data about (try and keep it in one easily searchable place). Still nice to discover, or should I say re-discover, a fundamental truth. It is sometimes referred to as the philosophic burden of proof and has been debated and thought about extensively.

Wikipedia Proof of Negative

018 Postgres : Export Data and Structure of a Single database from a Postgres instance / Import Data and Structure of a Single database into a Postgres Instance

Demonstration environment and programs
Windows 10
Postgres Version : 11.2
QGIS desktop version : 3.4.4

My working through of a process to export a single database (structure and data) from a Postgres instance – the database has the PostGIS and pgRouting extensions enabled – followed by importing it, in this example into the same instance, although in principle it could be a different instance.
Access the command prompt (RUN AS ADMINISTRATOR)

PLEASE NOTE: run the command prompt as administrator or you will frequently get an ACCESS DENIED message after using the pg_dump command.

Navigate to the directory of the Postgres version from which you wish to export the database. This will typically be the bin subdirectory of your version of Postgres (here 11). You can confirm that pg_dump.exe is here by doing a dir on the directory; alternatively you could reference the full path to pg_dump and then pass the parameters to it.

eg

Next, pass in the parameters stating which database you wish to export along with the name that you want to give the exported file, and then hit return.

pg_dump -U postgres -p 5432 edinburghrouting > c:\dbexport.pgsql

On hitting return, depending on the security of your instance, you will be prompted for a password.

Enter the password and hit return.

When I do this on my home computer there is no return message but going into the C drive I can see that dbexport.pgsql now exists.

Next we want to create a blank database – this is required to import the data and structure into.
This we do in psql, signed in as a user with sufficient privilege.
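
That step is just an ordinary CREATE DATABASE statement; a minimal sketch using the database name that appears in the import command below:

CREATE DATABASE importededinburghrouting;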

Now back in the command line running as administrator we can run the following.

psql -U postgres importededinburghrouting < c:\dbexport.pgsql

On pressing return, depending on your security, you should be asked for your password.

Once this is done it goes through a process of recreating the structure of the database and then importing all the data.

For me the first lines look like this

and the last look like this

Now looking at the instance as a whole we can see the imported database

and here I am displaying geographical information through QGIS to get an idea of the data and ensure that it appears to be all correct.

    SUMMARY


There are quite a lot of tutorials online on how to do this but most seem to skip over some details - I've tried to be as accurate as possible, but depending on your setup there may be differences. Nonetheless this is an extremely important task to perform, so it is worth practising to get right.

MS Access Function : Loop through tables and export to csv

A function that will loop through an access database and export all tables to csv and xls.

Useful for subsequent import through QGIS into Postgres.

Public Function ExportAll()
    Dim obj As AccessObject, dbs As Object
    Dim strFolder As String
    strFolder = "c:\"
    Set dbs = Application.CurrentData
    For Each obj In dbs.AllTables
        If Left(obj.Name, 4) <> "MSys" Then
            DoCmd.TransferText acExportDelim, , obj.Name, strFolder & obj.Name & ".csv", True
            DoCmd.TransferSpreadsheet acExport, acSpreadsheetTypeExcel9, obj.Name, strFolder & obj.Name & ".xls", True
        End If
    Next obj
End Function

SQL SERVER – Create TSQL Trigger to add date to existing record after INSERT or UPDATE

I came across an interesting issue with a web application I have been trying to create. DEFAULTS in SQL Server didn’t seem to be registering when new records were inserted through the web application.

How come? Surely the web application is not bypassing requirements set out by the database.

What I believe is happening is that the web application is explicitly supplying a null value to the dateupdated field and because of this the default value is NOT applied by SQL Server. You are therefore in the strange position of creating a trigger on a column after insert. Normally you would never really do this as that’s specifically what default is for.

Here’s code to create the trigger on the SQL Server that will do just this.

CREATE TRIGGER TR_moddate
ON T0001PersonData
AFTER INSERT, UPDATE
AS
BEGIN
    UPDATE T0001PersonData SET DATEUPDATED = GETDATE()
    WHERE PKID IN (SELECT PKID FROM INSERTED)
END

Some explanation – This updates the field DateUpdated within the table T0001PersonData after an update or an insert. If you have a default on the column this may mean that it updates to the default, then to null, then to the trigger value, so you may wish to remove the default on the column. Alternatively you may wish to get the front end to update to the current date. I guess this could have performance issues at scale but my application is pretty small beer so I am happy with this at the moment. I think I prefer to have it at database level as well. It should be noted that INSERTED is actually a pseudo-table created by SQL Server and held in memory that lets you reference the affected rows before they are registered in the database. Leaving out the WHERE clause would have resulted in the application updating the dateupdated column for all records any time an update or insert was performed. BE WARNED

MS Access – VBA Functions – Create Category Tag Junction Table by comparing a text field against a table of categories

Going forward there are more and more systems that have somewhat unformatted text or memo fields. It can be useful to tag such fields. Here's a collection of two functions, with a script to pull them together, designed to create a junction table.

What’s nice about it is that it could be used in lots of situations as a nightly process to tag manually input notes to help assist users navigate screeds of text.

This code is generalized and would need to be adapted for your specific table and field names

In mine you will need 4 tables
T001TableContainingFieldtobeCategorized – as per the title it has a field called PKID and a field called Text, which is the memo field against which the SQL compares categories
T002Category – table that contains the categories that are compared against the text field
T003JunctionTable – the junction table that will contain the links between our notes table and the category table
T004SQL – table to contain the update queries – the field storing the strings is SQLstring – RunQueriesFromTable uses the SQLstring query and places the result in T003JunctionTable

The function RunQueriesFromTable is a previous function I wrote

Function CategorizeField()

'Make sure the name of the recordset is unambiguous
'Good practice to reference the actual library

    Dim db As DAO.Database
    Dim rs1 As DAO.Recordset
    Dim rs2 As DAO.Recordset
    Dim rs3 As DAO.Recordset
    Dim SQLUpJunc As String
    Dim strQuote As String
    strQuote = Chr$(34)

    Set db = CurrentDb
    Set rs1 = db.OpenRecordset("SELECT * FROM T001TableContainingFieldtobeCategorized")
    Set rs2 = db.OpenRecordset("T004SQL")
    Set rs3 = db.OpenRecordset("T002Category")

    'The data source can be a table name, a query name or an SQL string
    'It would be possible to change the SQL to work on another set of records
    'Check to see if there are any records in the set

    If Not (rs3.EOF And rs3.BOF) Then
        'There are no records if End of File and Beginning of File are both true

        rs3.MoveFirst

        Do Until rs3.EOF = True

            SQLUpJunc = "INSERT INTO T003JunctionTable ( FKIDT001, FKIDT002 ) SELECT T001TableContainingFieldtobeCategorized.PKID, " & rs3!PKID & " AS FKIDT002 FROM T001TableContainingFieldtobeCategorized WHERE (((T001TableContainingFieldtobeCategorized.Text) Like " & strQuote & "*" & rs3!Category & "*" & strQuote & "));"

            With rs2
                .AddNew
                rs2!SQLstring = SQLUpJunc
                rs2.Update
            End With

            rs3.MoveNext
        Loop
    Else
        'MsgBox "No Records available for updating exit sub"
        Exit Function
    End If
    'MsgBox "Looped through the records and updated table of SQL"

    rs2.Close
    Set rs1 = Nothing
    Set rs2 = Nothing
    Set rs3 = Nothing
    Set db = Nothing

    'Remember to break an infinite loop press ctrl + break

End Function
Public Function RunQueriesFromTable(SQLSource As String)

    DoCmd.SetWarnings False

    Dim StartTime As Date
    Dim EndTime As Date
    Dim rstZ As DAO.Recordset
    Dim strSQL As String

    StartTime = Now()

    Set rstZ = CurrentDb.OpenRecordset(SQLSource)

    Do Until rstZ.EOF

        strSQL = rstZ!SQLstring
        DoCmd.RunSQL strSQL
        rstZ.MoveNext

    Loop

    DoCmd.SetWarnings True

    EndTime = Now()

    'MsgBox "Finished ALL SQL queries! Process started at " & StartTime & " and finished at " & EndTime

End Function
Public Function CreateJunctionTable()

    Call CategorizeField
    Call RunQueriesFromTable("T004SQL")

    MsgBox "Finished"

End Function

MS Access like development environments for the Web – 3 alternatives

So you would like to construct simple applications that you can at the moment create in MS Access, but you want to do it on the web. By that I mean you would like to create a data-driven application with somewhat complicated forms that can be accessed by anyone, through either IE or Chrome, anywhere in the world, with a simple login screen at the front to prevent just anyone accessing the application and the information it collects into a database. What are your options for programs that will assist you in an MS Access like environment – rather than going the full IDE deep dive, Visual Studio, route – for what I consider to be a reasonable fee?

From my experience the unicorn of Access on the web is slowly coming to fruition, BUT for the vast majority of people with a budget similar to that for MS Access – let's say £200 ($250) a year for unlimited applications – there is simply nothing which is quite as easy and powerful as MS Access. Some are pretty close but are simply not as stable and typically require several orders of magnitude more configuration. WYSIWYG design isn't quite as WYSIWYG and stability is a few orders lower than on the desktop.

What you are probably looking for can typically be described as either RAD tools for the Web, a Low Code Platform or something called a Code Generator – any of those phrases can be useful for Google.

Assuming you don’t have your own servers whatever you do you will need to spend money on a web host.

The minimum this is likely to cost you is in the region of $15 a month. If you don't want to spend the next 6 months learning about the insides and outsides of frameworks then I would suggest you go to one of the below three providers, who all provide complete environments set up and ready to go with their particular generators pre-installed. This is good value – it is extremely difficult to beat these guys on cloud hosting costs and unless you are very advanced and have very particular requirements it's a waste of time to try. All three of the below providers will allow you to create a limitless number of applications, albeit you are limited by the space you hire on your web server. Similarly distribution will be limited by the quality of web server you sign up for. In all likelihood, if you have few users, it is unlikely that the coding front ends of your applications will be a limit to the number you create – more likely the size of the databases you are attaching them to and the sheer time you have available to create applications.

For a period I was paying a monthly amount for a Nubuilder Pro hosted platform. This performed well and I could create an unlimited number of applications. As it was hosted for me I skipped the step of learning some of the deeper parts of the initial configuration. I hope at some point to go back to this. It is open source and seems well maintained with a very dedicated developer. The developer re-wrote much of it and as at March 2019 its latest re-incarnation is Nubuilder Forte.

Be warned: n-tier web applications do not play as friendly as the desktop. You WILL be slower to construct applications than you are on the desktop, getting into it WILL take time and a bit of commitment, you WILL have far less flexibility regarding coding, there WILL be fewer people about to ask questions, and there are far, far fewer WYSIWYG design capabilities; error trapping is poor and errors are far more likely to be fatal. The really big warning is that on release of new web frameworks you may not necessarily be able to update without a full site re-design (a fact that comes as a nasty surprise to many CIOs and Project Managers when they realise that they are locked into front end system replacements every 4 or 5 years). Know how to get data out of the back end to your local environment, and accept that the front end is ephemeral and not likely to last in the same way as your desktop applications. (Your database will last, but don't expect to be running it through the same front end ten years from now.) Accept that you will now have monthly or annual rental fees for cloud provision.

That said, the design of these items is significantly faster than it's ever been.

Scriptcase and ASP Runner dot net (Xlinesoft also produces a PHP equivalent generator) have free downloads that are good for getting started.

Commit to one and go for it. I've got both PHP and ASP.NET solutions. Nubuilder only connects to MySQL, whereas Scriptcase and ASPRunner.NET connect to pretty much any database. I started with Nubuilder and am using ASPRunner.net as well because, importantly, it connects to SQL Server and it was easy to get up and running in MS Azure. Scriptcase is PHP based and I believe the applications it builds require some kind of security program to sit on the web server – this put me off – they do however have hosting that you can sign up for which is pre-configured. Their IDE is web based, which could be a winning advantage. One of the great advantages of ASP Runner dot net is that the program produces an open web application that should run on all stock servers. I found Nubuilder Pro (now Nubuilder Forte) really conceptually elegant; despite its rather drab looks it is incredibly flexible, although the applications it produces are limited to MySQL and are non-responsive (but being non-responsive you can get more detailed forms!). I would probably be able to change its look if I was prepared to get my own server and install everything on it myself. That is not something I have time to do at present.
Nubuilder hosts its IDE in the browser, which again is an advantage. ASPRunner.net is more traditional in that you have a program running on a desktop that creates the plumbing of your application, which you then need to push to a server for publication. This has the advantage that you get to see the plumbing in the background, which makes backup of the site easier but publishing slightly harder.

You may have heard of other generators / design applications out there – for example Zoho Creator / Alpha 5 / Outsystems. These hold your hand even more, but as a result are even more proprietary and won't fit in that budget of £200 per year (by quite a long way!).

Some further information on costs – Nubuilder, being open source, in theory could scale for very little money, especially if you have your own servers already. Scriptcase and Xlinesoft's ASP Runner products have an initial fee followed by an annual subscription – you may be able to configure things so that you can create unlimited applications for that one fee (if you have good access to web servers), but it is likely that initially there will be some kind of variable cost per additional application you wish to build. I am using MS Azure with ASP Runner dot net and a developer database costs me about £5 a month, with each application being hosted in a web application service which again costs £5. With both Scriptcase and ASP Runner, if you stop paying the annual fees your applications will continue to work – you will just not get version upgrades. You will be able to step back into version upgrades but you may need to restart your subscription with an additional fee.

Nubuilder Forte Link

Scriptcase Link

ASP Runner – PHPRunner and ASPRunner.Net Link

Good luck

nuBuilderPro – a database driven Web Framework for simple Form Design

I have been seeking a way of designing simple web forms for a reasonable cost for some time. I was continually coming across either very professional products with really quite large annual charges (Alpha Anywhere) or alternatively products that charge a per-seat cost for users (Zoho Creator). Fine, I don't mind paying for software, I really don't, but what about a project that only brings in $2,000 a year – how can I justify Alpha Anywhere's costs for that, or how can I give access to individuals who will need an application very rarely but for whom there is real value in knowing this information? I have long believed that software costs should be coming down rather than ever increasing, and getting users interested in your applications is often impossibly hard at the best of times when there is no cost, let alone when you tell them that they need to shell out for a subscription. What happens to the user who only needs something once in a blue moon, or a user not from within your department, in a section where you have no control of the budget?

I've recently had a lot of success with open source software and as a result I have been scouring the internet for options for some years. Recently I found a project called nuBuilderPro from an Australian based development house called nuSoftware. nuBuilderPro is version 3 of what I think is their own framework. It looked good, and what's more, although it is based on the LAMP stack, which is something I've had very few dealings with, they offered a fully hosted VPS service where I could experiment and get my bearings. They completely host the development environment for a very, very reasonable price. Given that I didn't know whether it was going to be useful to me yet, this seemed an ideal opportunity to experiment. It has only been a short time since sign up but I can already see that it will prove useful. I signed up for the most basic of server accounts and as of the 17th of January had created 3 very basic applications, all in about 3 hours of work. That's what I like to see – proper RAD development.

The website is here.

www.nubuilder.net

And the options for hosting are here

www.nubuilder.net/hosting.php

Importantly you can start small get to know the product and work up – additional users do NOT cost you extra money, login security comes built in and you are free to design as many applications as you can fit within your VPS. Given that when I first signed up I didn’t have any users or applications and didn’t know how to design applications, this seemed completely ideal.

So far it looks like I'm going to use it for small but important applications that need very quick development and that are very distributed. That's not to say it couldn't be used for much larger applications, but I need to walk before I can run. Given the underlying database engine and stack it is likely to be able to handle much bigger projects than I am likely to throw at it. Although it is open source, all but the top of the stack has been in solid development for many years; I guess the question mark is over the control management framework for the database integration and form building UIs, which is somewhat new and untested. I would very likely have gone with something like Lightswitch, but Microsoft have been showing a distinct lack of commitment to any one RAD web tool recently. Certainly it is true that very few individuals seem to know about it and the forum is somewhat quiet compared with some open source projects. Importantly though, I haven't come across any other open source projects with the price model that they have, and I can easily get any information in and out, so I don't believe I am taking any risk. I hope to be investigating it further over the coming months. From what I have seen so far I am very impressed – and did I say it had a unique price model? Anyway, watch this space!

SQL Saturday #388 Edinburgh

Went along to SQL Saturday BI Edition on 13th of June.
Honestly I thought it was really great. Many thanks to Jenny Stirrup for being the event organiser. If you are thinking about going next year you really should.

This was the schedule for SQL Saturday in Edinburgh

Sessions I attended
Key Note with Jon Woodward – on long term future of computing in general
Web Scraping with Python – Iain Elder of Sky Scanner
Better Data Visualisation for CRM and ERP – Adam Vero
The European Economic Crisis and the Euro-A Data Tale – Carmel Gunn
Blue Sky Thinking : SQL Azure Geospatial Mashup – Thomas Sykes
Master Data Management – Dave Lawrence

All the talks were really thought provoking and nice to hear from people who are really at the top of the game. I have already started experimenting with Web Scraping and Python.

Setting up a Blank SQL Server Spatially enabled Table using Microsoft SQL Server Management Studio 2008R2 Express and displaying it in QGIS 2.8.1

Programs used;

1- SQL Server 2008R2 Express
2- SQL Server Management Studio 2008R2 Express
3- QGIS

The example uses UK national grids coordinates to create a Triangle Polygon in a SQL Server Table

I’ve previously written that while we’ve had spatially enabled SQL Server for over 5 years I constantly come across line of business applications that although using SQL Server have not and do not intend to spatially enable the application. This is undoubtedly because of the difficulty in re-designing legacy systems actively in use and because the benefits although significant are not generally requested by all but the most knowledgable of colleagues.

While I understand this legacy system reasoning, spatially enabled databases are the future, so it's just a matter of when and not if an application will require alteration. Understanding it in this context makes it really a requirement to start seriously planning for its inclusion.

Developers creating new applications, however, should always consider spatially enabling relevant tables from the start, even if it is not specified by the client/colleague. It is so much easier to spend a couple of minutes future-proofing the schema of a newborn database than hours trying to retrofit a live, in-production back end.

Firstly it’s important to understand what a geodatabase in SQL Server actually is.
Really it is a normal database which has at least one table with a field of the geometry or geography value type. In this example I will use desktop QGIS 2.8.1 to display the resulting geometry, but any other digital mapping package that can link to SQL Server could be used. SQL Server also has a very rudimentary mapping display, but you will need something better if you want to manipulate boundaries visually.

Many digital mapping products have plugins that will create geodatabases and tables, however I haven't seen one for QGIS. I really wanted to be able to create spatial SQL tables on my own, without recourse to paid tools, directly in SQL Server Management Studio. So here's my method of creating a blank polygon table whose geometry is ready to be read and edited in QGIS or any other digital mapping system, just using SQL Server Management Studio Express 08R2.

Steps (a sketch of the first three steps follows the list)
1. Create a new Table
2. Ensure the table has an identity Key that increments
3. Create a geometry column
4. Write a query that updates the geometry column
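
Here is a sketch of steps 1 to 3, using the table name and Coordinates column that appear in the update query below (the other column is purely illustrative):

CREATE TABLE dbo.T001Sites
(
PKID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
SiteName NVARCHAR(100) NULL,
Coordinates GEOMETRY NULL
);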

UPDATE T001Sites SET Coordinates=geometry::STGeomFromText('POLYGON((301804 675762,295789 663732,309581 664870,301804 675762))',27700)

You will note that there are four coordinates here (each coordinate being a pair of numbers).
The first and last coordinates are the same – this is required by SQL Server to ensure that the polygon is closed.

The 27700 number is the Spatial Reference System Identifier (SRID) – a unique value used to unambiguously identify the projection. 27700 is the unique identifier for the United Kingdom National Grid and represents the coordinates my example refers to. The spatial reference identification system is defined by the European Petroleum Survey Group (EPSG) standard, which is a set of standards developed for cartography, surveying etc. and owned by the Oil and Gas Producers Group – list here: http://www.epsg-registry.org/

The above coordinates display a triangle in West Lothian near Edinburgh

5. Set up the connection to SQL Server Instance

Ensure the checkbox marked “Only look in the geometry_columns metadata table” is unchecked. By default this is checked, and if the geometry_columns table does not exist you will get an error message.

QGIS-SSMS-Connection

6. Display the table and edit as appropriate.

Select the table then hit the Add button

QGIS-SSMS-TableReadyforDisplay

And here is the SQL Server table in QGIS ready to be added to or edited.
QGISshowingSQLServerPolygon

Database Normalisation – a review of the rules to refresh my knowledge

Here is a review of the 6 forms of normalisation (yes, the 4th is called 3.5 for some reason) – there are lots of summaries of these on the web, but I wanted to re-write them in my own words to ensure I understand them fully. Revisiting them after real-world experience is particularly enlightening.

Normalisation rules are cumulative, i.e. a database or set of tables can be said to be 1 through to 5 compliant. To rank at the highest level it must comply with all previous forms.

That is to say that a database (or table) to be in the 2nd Normal Form must first fulfill all the criteria of the 1st Normal Form.

A database is said to be properly normalised if the first three normalised forms are inviolate.

Violations of normalisation forms subsequent to the third normal form may be legitimately required and as such are NOT absolutely required for a database to be classified as properly normalised.

Some web explanations do not list the forms beyond 3. I would argue there is still a lot of value in trying to adhere to these less well-known forms, but I must admit to not always including them in a design, usually due to poor knowledge of the application domain.

The differing forms of normalisation start simple becoming increasingly complicated.
If you have a lot of experience in creating relational backends you may have deduced these issues for yourself.

(1NF) First Normal Form – Edgar F Codd – first stated 1970
Eliminate duplicate columns from the same table.
Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).

(2NF) Second Normal Form – Edgar F Codd – first stated 1971
No partial dependency on anything but the key fields

(3NF) Third Normal Form – Edgar F Codd – first stated 1971
Every non-prime attribute is non-transitively dependent on every key of the table. The example I saw was that if you had a table of Wimbledon winners, some of whom have won repeatedly, you would not store their birth dates in it. You would have a junction table of the winners linking to the names of the individuals, and it would be in that individuals table that you would store the dates of birth.

The 3.5 Normal Form or the Boyce Codd Normal Form
This is a slightly stronger version of the third normal form developed in 1974 by Raymond F. Boyce and Edgar F. Codd to address certain types of anomaly not dealt with by the 3rd Normal form as originally defined.

Only in rare cases does a 3rd Normal Form table not meet the requirements of Boyce-Codd Normal Form. A 3rd Normal Form table which does not have multiple overlapping candidate keys is guaranteed to be in Boyce-Codd Normal Form.

Web investigation indicates that the most concise description is:
Every determinant must be a candidate key

and that Boyce-Codd Normal Form is not always achievable.

Nearest Shops
Person  | Shop Type   | Nearest Shop
Mark    | Jewellers   | H Samuel
Mark    | Barbers     | Mr Man
Natalia | Coffee Shop | Starbucks
John    | Coffee Shop | Costa
John    | Barbers     | Mr Man
John    | Jewellers   | H Samuel

The issue above is that the shop type against Mr Man on record 2 could be changed to Hairdressers, but the second-last record (John) would not change, and as such the data would be inconsistent. In this case I would split this table into two, with Nearest Shop and Shop Type being linked, and with some kind of table that links People to Nearest Shops.

The 4th Normal Form
A table is in the 4th Normal Form if it has no multi-valued dependencies.
Definition: Multivalued dependencies occur when the presence of one or more rows in a table implies the presence of one or more other rows in that same table.

For example, let's say you have a shop selling coffee – let's call it Starbucks – and you have a table that lists the permutations of coffee sold in its stores
CoffeeShop | Coffee Product | Location
Starbucks  | Café Latte     | Livingston
Starbucks  | Americano      | Livingston
Starbucks  | Café Latte     | Edinburgh
Starbucks  | Americano      | Edinburgh

If we know that all Starbucks stores sell the same products, then if there is a new product in one shop we would expect that new product to be in the other shops – so adding a Skinny Latte to the Livingston shop implies that there should be another record for Skinny Lattes in the Edinburgh shop.

To eliminate the possibility of anomalies we must place the facts about the products and the locations of the shops into two tables (a sketch in SQL follows the tables below).

Products
Starbucks | Café Latte
Starbucks | Americano
Starbucks | Skinny Latte

Location of Coffee Shops
Starbucks | Livingston
Starbucks | Edinburgh
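
A sketch of that split as table definitions – the table and column names are illustrative only:

CREATE TABLE CoffeeShopProduct
(
CoffeeShop NVARCHAR(50) NOT NULL,
CoffeeProduct NVARCHAR(50) NOT NULL,
CONSTRAINT PK_CoffeeShopProduct PRIMARY KEY (CoffeeShop, CoffeeProduct)
);

CREATE TABLE CoffeeShopLocation
(
CoffeeShop NVARCHAR(50) NOT NULL,
Location NVARCHAR(50) NOT NULL,
CONSTRAINT PK_CoffeeShopLocation PRIMARY KEY (CoffeeShop, Location)
);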

Alternatively, if the coffee types did vary from location to location, the original table would satisfy 4NF.

The 5th Normal Form
A table is in fifth normal form, or Project-Join Normal Form, if it is in the Fourth Normal Form and it cannot be non-loss decomposed into any number of smaller tables.

Or as I like to call it the – I know about it and try to adhere to it but sometimes I’m a bit lazy form.

I certainly try to keep my designs to the full six normalisation principles but there are instances where certain tables do not comply.

Save Record before Event

If users are editing or creating a record and there is an option on the form to print out the form, then if they press it they will expect the information that they have just entered to appear in the print preview. Normally MS Access does not save the information to the database until the record is exited or it is specifically instructed to. This can lead to user confusion when they enter a record, hit print preview expecting to see the record, and see a blank preview.

Placing the following code before calling the print preview will ensure that the record is saved prior to the print preview being triggered, resulting in the correct information being displayed in the preview. An easy fix – you should always do the easy wins!

Note: order is important – put this before the event you are wishing to trigger (this line is quite ubiquitous in my code).

DoCmd.DoMenuItem acFormBar, acRecordsMenu, acSaveRecord, , acMenuVer70

It's the constant small touches that make great applications.