PHPRunner – Using Sweet Alerts to give Users confirmation of Save

In PHPRunner, as of version 10.51, the SweetAlert JavaScript library is included in PHPRunner-generated web applications.

How can we add code to a project to make bespoke adjustments?

Firstly, navigate to the form to which you wish to add a special sweet alert and insert a Custom Button.

Next navigate to the Events tab in PHPRunner and expand the table or view to which you added the additional button.

Behind the JavaScript OnLoad event add the following code:

$('a[id^="saveButton"]').hide(); // Hide button "Save"

And on the Client Before event of the new button add the following code:

Swal.fire({
  icon: "success",
  title: "Saved",
  showConfirmButton: false,
  timer: 1000 // the alert closes itself after one second
});
$('a[id^="saveButton"]').click(); // trigger the real (hidden) Save button
return false; // stop the custom button's own default action

So what does the code do?
JavaScript OnLoad event – hides the real Save button.
On click – triggers the SweetAlert success routine and clicks the hidden Save button, so the record saves while the alert is displayed.

And for more examples and inspiration:

Sweet alert 2

And if you find the SweetAlert modal size too small, add this to your page CSS:

.swal2-popup { font-size: 1.6rem !important; }

Most of this came from the following Xlinesoft user forum thread:
Thread

MS SQL Azure – Computed Column – Create UTC Datetime value from Datetime & Timezone fields – Daylight Saving

This is not tested at the time of writing; it was obtained from ChatGPT 4.

Answer
Sure, here’s an example of how you might implement the ConvertToUTC function to handle time zones like ‘London/Europe’. This example uses the AT TIME ZONE clause, which is available in SQL Server 2016 and later:

CREATE FUNCTION dbo.ConvertToUTC (@localDateTime DATETIME, @timeZone NVARCHAR(50))
RETURNS DATETIME
AS
BEGIN
    -- Convert the local datetime to a datetimeoffset value using the specified time zone
    DECLARE @datetimeoffset DATETIMEOFFSET = @localDateTime AT TIME ZONE @timeZone
    -- Convert the datetimeoffset value to UTC
    DECLARE @utcDateTime DATETIME = CONVERT(DATETIME, SWITCHOFFSET(@datetimeoffset, 0))
    RETURN @utcDateTime
END

You can then use this function in a computed column definition:

ALTER TABLE YourTable
ADD UtcDateTime AS dbo.ConvertToUTC(field01, field02)

In this example, field02 should contain time zone names that are valid for the AT TIME ZONE clause. For example, ‘UTC’, ‘Pacific Standard Time’, ‘Central European Standard Time’, etc. For London, you would use ‘GMT Standard Time’.
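A quick way to sanity-check the function is to pass in a winter date and a summer date (a minimal sketch, assuming the function and time zone name above):

SELECT dbo.ConvertToUTC('2023-01-15 12:00:00', 'GMT Standard Time') AS UtcWinter, -- expect 12:00 (GMT, +00:00)
       dbo.ConvertToUTC('2023-07-15 12:00:00', 'GMT Standard Time') AS UtcSummer; -- expect 11:00 (BST, +01:00)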

Please note that this is just a basic example. The actual implementation would depend on how you’re storing time zone information and the specific requirements of your application. It’s also worth noting that handling date and time conversions can be complex due to factors like daylight saving time, so it’s important to thoroughly test your solution to ensure it handles all edge cases correctly.

If you’re frequently performing these types of conversions, you might want to consider storing datetime values in UTC directly in your database. This can simplify your data model and avoid potential issues with time zone conversions.

Question
Does AT TIME ZONE have a default value that will alter depending on whether the UK is under British Summer time?

Answer
Yes, the AT TIME ZONE clause in SQL Server does take daylight saving time into account. When you use a time zone name with this clause, it will automatically adjust the time based on the current daylight saving time rules for that time zone.

For example, if you use ‘GMT Standard Time’ (which is the time zone name for UK time in SQL Server), it will automatically show the correct offset for daylight saving when it is in effect.

Here’s an example:

SELECT GETDATE() AT TIME ZONE 'GMT Standard Time'

This will return the current date and time on the database server BUT, in my testing, it does NOT adjust the time to the time zone; it does show the correct time zone OFFSET for the date, corrected for daylight saving, i.e. Greenwich Mean Time (GMT) or British Summer Time (BST), depending on the date.
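You can see the daylight saving adjustment by comparing a winter date with a summer date (a small illustrative sketch):

SELECT CAST('2023-01-15 12:00' AS DATETIME) AT TIME ZONE 'GMT Standard Time' AS WinterGMT, -- offset +00:00
       CAST('2023-07-15 12:00' AS DATETIME) AT TIME ZONE 'GMT Standard Time' AS SummerBST; -- offset +01:00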

Please note that the actual time zone names and their daylight saving time rules are based on the time zone data on the server where SQL Server is running. This data is updated through the Windows operating system, so it’s important to keep your servers up-to-date to ensure that the time zone rules are accurate.

And here are what I think are the supported time zones
MS Azure supported time zone values

Post-script
A lot of online editors won't let you run the AT TIME ZONE function, but this one does:

dofactory

Comparing the returned time to the time at my location, I can tell that this server is on Pacific Standard Time – i.e. western America, California perhaps.

We also note that the time isn't adjusted to the stated time zone, but we do see the offset (see +01:00 in the returned record). Again, this is because AT TIME ZONE doesn't per se tell you what the server is set to; but by using SYSDATETIMEOFFSET() we can correct the server time back to UTC and then adjust for the time zone – see the second code example. The time zone, although included in the returned value, is again only stated as an offset.

SELECT SYSDATETIMEOFFSET() AT TIME ZONE 'W. Europe Standard Time' as WEuropeTime, 
SYSDATETIMEOFFSET() AT TIME ZONE 'GMT Standard Time' as GMTStandard

And so there is a direction to the time zone switch – above we have mainly been switching from UTC to a time zone, but the code below switches from a time zone to UTC, which is what we will need if we are storing the input as a datetime plus a separate time zone for each record.

SELECT SWITCHOFFSET(GetDate() AT TIME ZONE 'Pacific Standard Time', '+00:00')

Remember though that time zone definitions are held outside SQL Server databases, on the server's operating system, and as such conversions that use them are non-deterministic. This is a good demonstration of determinism in practice:

Deterministic algorithms are entirely predictable and always produce the same output for the same input.

Non-deterministic algorithms may produce different outputs for the same input due to random events or other factors.
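In practical terms this is why, to my understanding, SQL Server will refuse to PERSIST (and therefore index) a computed column based on the conversion above – a sketch of what I would expect to fail, untested, using the earlier YourTable example:

ALTER TABLE YourTable
ADD UtcDateTimePersisted AS dbo.ConvertToUTC(field01, field02) PERSISTED;
-- expected to be rejected as non-deterministic; the computed column
-- has to remain non-persisted, as in the earlier example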

Databases and Time – Time Zones are not Datatypes

When working with time in databases and web applications we should recognise that time zones are NOT a datatype. A time zone is an important piece of context attached to a datetime value – it may represent the time on the server, the time in the browser, or the time in a particular country for an attribute of a record. Viewing a datetime value on its own is incomplete if users are entering data from all over the world, and if this is not designed into the schema from the beginning, problems will be presented when filtering and ordering records based on datetime.

Does your application have users within the same or different time zones? Is your database hosted in the same or a different time zone?

SQL Azure database servers, for instance, only allow a default server time zone of UTC. This means that wherever you are in the world, if you set a default value on a datetime field, new records will record time in UTC. This can present issues where users submit values based on a different localised time.

Default values that are set to GetDate() can therefore be out by a given time offset. Additionally, web interfaces which set the datetime value at input may take it from the browser, and if the browser is not on UTC that clock may be ahead of or behind UTC. What's the solution? My suggestion is to record the time zone in a field per record separate from the datetime value, and then use a view to convert all datetimes to a common time zone (probably best UTC), as sketched below. At least that should reduce complexity for users. Developers and users should be aware that this could be confusing when a user inputs a time into a database and is then presented with the UTC value when viewing it.
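A minimal sketch of that view, assuming a hypothetical table tRecords with a pkid, a datetime column localdatetime and a column recordtimezone holding a valid time zone name per record:

CREATE VIEW vRecordsUtc AS
SELECT pkid,
       localdatetime,
       recordtimezone,
       -- attach the per-record time zone, then switch the result to UTC
       CONVERT(DATETIME, SWITCHOFFSET(localdatetime AT TIME ZONE recordtimezone, '+00:00')) AS utcdatetime
FROM tRecords;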

It should be noted that Azure Logic Apps run on a server that is not set to UTC but to local time (in my case London / Europe); I suspect that whatever locale you set will affect the local time. This led to a situation where, in British Summer Time, a Logic App procedure triggered for 10pm was showing 9pm in the database – the BST offset.

Repeat after me – Time Zones are not Datatypes

And here is a nice site that identifies differences from UTC of timezones based on URL parameters

Time around the world
British Summer Time – 10am UTC

Eastern Seaboard Time – 10am UTC

Sydney Australia – Aussie Eastern Daylight Time – 10am UTC

WordPress and OceanWP(theme) – Default config for blank Blog

WordPress is excellent, but setup through the admin menu system seems to me unintuitive, and a standard configuration can be really difficult to replicate. I constantly wrestle to get sites how I like them, so I resolved to document a standard configuration with WordPress and the theme OceanWP (free version) to speed things up.

What are my preferences on good design? A wide section for text and obvious and intuitive search option with menu that can be setup dynamically to add new posts to menu under their respective category without continually needing to alter the Menu design.

It should be noted that I don’t use the OceanWP theme for this site – which is a historical thing.

I’ve used the OceanWP theme (free version) for some time; it’s a good, popular free WordPress theme.
You can read more about OceanWP here
My shared hosting provider is InMotion; I have found them to be excellent, particularly their online support, which is 24/7 – link here

Firstly, install a default WordPress website. My provider uses Softaculous through cPanel; your preferred web hosting management platform may vary. I won’t document this stage.

Next install the OceanWP Theme and add Plugins.

Next I add my preferred plugins, some of which come with shared hosting WordPress installation;

  • Akismet (default with my shared hosting wordpress installer (inmotion))
  • Broken Link Checker
  • Jetpack (again default with my shared hosting wordpress installer (inmotion))
  • Login logo
  • Nginx Helper (default)
  • Password Protected (if enabled will require a password for the entire site)
  • Post Types Order
  • Total Upkeep (I use premium as it allows for improved backups)
  • Current Year, Symbols and IP Shortcode (allows addition of copyright symbol with dynamic year)

    Next, the default configuration of the OceanWP WordPress theme using the above plugins

    Settings / General
    – alter site title and tagline

    Settings / Reading / Search engine visibility
    Set as appropriate: tick the box to discourage search engine indexing, or leave it clear to allow it
    – Discourage search engine visibility
    This is also where you show blog posts as either Full Text or just an Excerpt. If you are on Full Text there is little point in having the Read More element in the blog element positioning section.

    Settings / Permalinks
    Apparently this helps with SEO
    – Postname

    Settings/ Discussion

    Appearance/Customize/Widgets/Default Sidebar
    This is where you can add elements and alter the order of elements in the sidebar. I find the admin design clunky and unintuitive, but I eventually discovered how to do it: use the edit icon and the three vertical dots to add and remove elements (search, for example, is not added and removed from the sidebar here).

    Appearance / Menus
    I set up a menu called Primary Menu and add Home and Index to it as follows. Note that Home is a Custom Link and Index is a Category link. If Home is not available you can create it and then add it – the URL will just be the default domain.

    Appearance / Widgets

    You can highlight recent comments and delete the entire block – I also add the category list to the side bar

    Appearance / Customise / General Options
    – Layout Style – Separate

    Appearance / Customise / Top Bar / General
    Turn off Top bar by ensuring the ENABLE TOP BAR tick box is blank

    Appearance / Customise / Menus
    Here I am ensuring the menus are not repeated. In OceanWP there are four places where you can place menus. I turn off the Top Bar, and I don’t like the menu being in the footer, so I leave both at the default blank -Select-.

    Appearance / Customise / Blog
    We are given two options here – Blog, which is the overall list, and Single Posts. I will normally remove the side panel for single posts, and I am still considering whether I should remove the side panel for the full list of posts; either way, you do both here.

    Appearance / Customise / Header / General
    I haven’t experimented with all the different options but one that seems to work is Minimal so these are the settings

    Appearance / Customise / Header / Menu
    Again there are a lot of options here and not all seem to work or seem to be intuitive so I have settled on the following

    Appearance / Customise / Header / Mobile Menu
    Again there are a lot of options, but here is a set that I am happy with and that works. Note, in the Elements Positioning drop-down, that it likes to have the Logo at the front, otherwise the magnifying glass can disappear. If you change it there is sometimes a lag on the display; this might be related to your hosting provider and caching (not sure).

    Appearance / Customise / General Options / Theme Icons
    I like to set to Font Awesome

    Appearance / Customise / Footer Bottom
    A nice touch to ensure that copyright is Set – here we use the shortcode from the plugin

    Appearance / Customise / General Options / Scroll to Top
    If you haven’t installed a Sticky Header this can be a reasonable alternative OceanWP specific

    Next let’s make sure the search box is above content on mobile devices. I like this as it makes it really obvious to users on mobiles how to use the search facility – the search facility being, to my mind, one of the most important things in a WordPress site. I also like infinite scroll.

    I’ve pretty much documented everything here. I haven’t touched the logo, which is how you get the icon on a site and is also something you are going to want to do – I will come back to this post and add that when I am doing it in anger for a demonstration site. Remember, if you are reading this, that these options are relevant to WordPress with the free OceanWP theme as installed at October 2023.

    Add POSTGIS extension to Azure Postgres Database – Worked Example

    Firstly we need to allow for the extension on the server.
    Within the Azure portal navigate to server parameters

    Search for azure.extensions

    Select which extensions to add
    REMEMBER to save before exit!

    Wait a short period while the extension is loaded onto the server..

    Some points
    Azure Database for PostgreSQL supports a subset of key PostgreSQL extensions as listed HERE. This information is also available by running SHOW azure.extensions;. Extensions not listed in this document aren’t supported on Azure Database for PostgreSQL – Flexible Server. You can’t create or load your own extension in Azure Database for PostgreSQL.

    After extensions are allow-listed and loaded, they must be installed in your database before you can use them. To install a particular extension, you run the CREATE EXTENSION command; this loads the packaged objects into your database.
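    For PostGIS that is a one-liner, run while connected to the target database (the version check afterwards is optional):

    CREATE EXTENSION postgis;
    -- confirm the extension installed correctly
    SELECT PostGIS_Version();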

    The extension(s) should now be listed against your database.

    Postgres on Azure : also named Azure Database for Postgres Flexible Server : MS Azure

    Quick Start Azure Documentation

    Good link on tips and tricks Azure

    Login to your azure portal and choose Create a resource

    Next select the Azure Database for PostgreSQL Flexible Server

    Now indicate the starting parameters
    Please note IF you want to connect using PGAdmin I found issues with version 15 so chose version 14 it is not clear why this is some quick googling suggested it was the version of PGAdmin I was using. So at the bottom of this post I capture the about PgAdmin 4 that I am using as a client. I was able to connect to the server vis psql.
    Remember all of your parameters and choose an appropriate account
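    For reference, a psql connection looks something like the following (a sketch – the server name, user and database here are placeholders, and Flexible Server requires SSL):

    psql "host=myserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require"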

    Note you can stop the server if this is a development test copy which is good for reducing costs…

    Do this by navigating to the server and selecting the stop button at the top.

    Note: give it time to start and stop – don’t, like me, try to start the server before it has finished stopping!

    MS Azure : Identify outbound IP addresses from an MS Azure Web App Service

    When setting up a Web App Service on Azure that you want to hook up to an Azure database, you will need to ensure that the IP addresses of the Web App Service are whitelisted on the database server. This is true of Postgres and SQL Azure in my experience. When you upload your project to the Web App Service and connect to SQL Azure, the error message indicates which IP address of the web application is failing. That is not the case when trying to connect to Azure Database for PostgreSQL Flexible Server (my experience at September 2023), so how do we find it? As usual the answer is in the Azure documentation, but thoroughly buried.

    One paragraph states

    Regardless of the number of scaled-out instances, each app has a set number of outbound IP addresses at any given time. Any outbound connection from the App Service app, such as to a back-end database, uses one of the outbound IP addresses as the origin IP address; the IP address is selected randomly at runtime.

    SO YOUR BACK END SERVICE MUST OPEN ITS FIREWALL TO ALL THE OUTBOUND IP ADDRESSES FOR YOUR APP

    So how do we find these outbound IP addresses?
    Log into the Azure Portal using the administrator account that has access to the Web App Service you want to identify outbound IP Addresses for.

    Select the web app service

    Look to Networking within Settings menu

    Now several boxes should be presented to you towards the right of your browser window identify the one marked Outbound Traffic.

    IP Addresses you are interested in should be presented where the red box is. There is a helpful copy button should you need to quickly copy them.

    So that is great, but what if you want to quickly identify the outbound IP address of a web app?

    So go to the windows terminal

    and use the nslookup command followed by the domain you wish to identify the IP address of.
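    For example (with a hypothetical app name):

    nslookup mywebapp.azurewebsites.net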

    Postscript
    One point: I seemed to have difficulty registering the outbound IP addresses, and a front-end application was repeatedly failing to connect to the Azure Database for PostgreSQL Flexible Server. I went through my outbound addresses thoroughly and then stopped and started the server itself; the restart option wasn’t working for me, so I did a complete stop and then a start. After this it seemed to work. I will continue to monitor this, but it’s not a process I perform often, so it may be some time before I can truly establish my issue. It should be noted that nslookup did not initially reveal the IP that worked, and I still don’t know which of the outbound addresses are working or whether it was the restart that made it work. I did note that the ‘allow all Azure connections’ option always corrected the issue (so this can be used for testing).

    Postgres – Export a Spatial database and then Importing again (pg_dump + psql utilities)

    Lets take this from the beginning. The first thing you need to do is be on the physical machine where the database is running.

    Next check that the database server is up and running

    To do that

    Go to the windows bar click on the search icon and type in

    Services

    Scroll down to P and look for the name postgresql, then look to the Status column. In the above example there are two entries, Postgres 11 and Postgres 14; of the two, only 14 shows as Running.

    There are several Utility executables that come with every postgres installation.

    The standard installation puts these into a directory equivalent to the following: C:\Program Files\PostgreSQL\14\bin

    We need to navigate at the command prompt to the location of the executable we are interested in. Technically you could fully qualify the path to the executable when running a command, but I tend to navigate to the location.

    So firstly in the search button go to the command prompt

    cmd

    cd C:\Program Files\PostgreSQL\14\bin

    pg_dump -U postgres -W -F p -d dbspatialhub > c:\sql\dbspatialhub.sql

    And here is a string that works if your database is on the local server with a non-standard port (substitute 5433 for your port):

    pg_dump -U postgres -h localhost -p 5433 -d dbsource > c:\sql\dbsourceexport.sql

    Run the command – you will note that I qualify the output directory here. I prefer to do this as I don’t particularly like writing and deleting files in the program directory; I prefer that directory to be reserved for standard files, which reduces the likelihood of me accidentally deleting an important file.

    You should be presented with the Password: prompt – input the password for the specified user (-U) to continue.

    Success will be shown as a return to the former prompt with no message.

    and running a DIR on the directory look for the sql file…

    Next, from either the psql prompt or from within pgAdmin 4, create the database you are going to import the SQL into and ensure that the postgis extension is installed. Remember this is the psql prompt and not the DOS command line prompt.

    CREATE DATABASE dbspatialhubnew;
    -- connect to the new database so the extension is installed into it
    \c dbspatialhubnew
    CREATE EXTENSION postgis;
    

    Now you have to go back to the command prompt (and the correct directory).

    If you are on a different port add the following string -p 5434

    psql -U postgres -d dbspatialhubnew -f c:\sql\dbspatialhub.sql

    After being asked for the password the import will start and continue showing a list of various commands and data being executed and imported.
    Similar to below.

    Now you can go back to a client such as QGIS and link to the database to start confirming a successful import.

    Congratulations you have copied a database.

    It should be noted that, as it stands, I am having problems importing the layers table, which holds the format of the boundaries for tables.
    I will continue to work on this.

    QGIS – Working with Free BASEMAP Services for Desktop Projects (Bing and Google Maps Aerial and Lines)

    As soon as you need to practically implement any information in spatial databases, display of the information through a mapping front end becomes absolutely vital. Many database administrators are used to simply going into their favourite database editors and displaying raw subsets of tables and queries. That works well for financial transactions and inventory tables, where collapsing the attributes of objects into single figures is often valuable, or possibly preferable to simple photos. When dealing with boundary information the complete opposite applies: display of information as simple screens of matrix numbers is completely useless.

    Additionally often boundaries make zero sense unless referenced to the land beneath them either through aerial photography or topographic maps.

    In a previous role we actually commissioned a company to give us aerial photos of a local authority. This cost a not-insignificant amount of money, probably only marginally reduced by the vendor having multiple channels of sale. Google and Microsoft are now very good at offering aerial and line interpretations for limited use to companies and individuals. This is absolutely great, as it can be used as a background either to confirm the accuracy of other information or as data from which to calculate further information (e.g. routing).

    So how can an individual get up and started with some of these basemaps.

    Sometime recently (I know not when exactly) QGIS changed its implementation of OpenStreetMap in the desktop – rather than being an additional plugin, OpenStreetMap provision is now included on install.

    Here I am working with QGIS version 3.10

    Now you should be presented with the Data Source Manager Dialog which looks like this

    Now expand the XYZ Tiles

    You can then double click on any one of the three and the tile layer will be transferred into the Layer panel

    Layer > Data Source Manager > XYZ
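    If only OpenStreetMap appears under XYZ Tiles, you can add further entries yourself via New Connection. For example, a commonly cited URL template for Google satellite imagery (an assumption on my part – check the current terms of use before relying on it) is:

    https://mt1.google.com/vt/lyrs=s&x={x}&y={y}&z={z}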

    Thank you to Google and Microsoft and OSM and QGIS for a great implementation.

    Long Passwords : Evidence showing why long passwords are a simple security principle

    Creating long passwords is one of the easiest ways of improving the security of your applications and resources – here’s a graphic that estimates the time it might take to brute force a password depending on the complexity of the character set and its length.
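    The arithmetic behind such graphics is straightforward: the search space is (size of character set)^(password length), so every extra character multiplies the work. A rough illustration, assuming an attacker testing 10^10 guesses per second (a figure assumed purely for illustration):

    26^8 (8 lowercase letters) ≈ 2.1 x 10^11 combinations → exhausted in roughly 21 seconds
    95^16 (16 characters from the full printable set) ≈ 4.4 x 10^31 combinations → roughly 1.4 x 10^14 years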

    Secure Password Generator

    Last Pass Password Generator

    Norton Password Generator

    1Password – random password generator

    bitwarden random password generator

    AVAST Password Generator

    MariaDB – Add calculated field that shows day name

    ALTER TABLE tablename 
    ADD COLUMN dayname VARCHAR(20) GENERATED ALWAYS AS (DAYNAME(startdate)) 
    STORED;

    This query will add a new column called dayname to the tablename table. The column will be of type VARCHAR with a maximum length of 20 characters. The GENERATED ALWAYS clause tells MariaDB to calculate the value of the column each time a row is inserted or updated. The AS keyword specifies the expression used to calculate the value, in this case the DAYNAME function applied to the startdate column. Finally, the STORED keyword tells MariaDB to store the calculated value in the table so that it can be retrieved more efficiently.

    Note that the GENERATED ALWAYS and STORED clauses require MariaDB 10.2 or later (the equivalent MySQL feature arrived in MySQL 5.7.6). If you are using an earlier version, you can still add a calculated field using a trigger or a view.
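    Once added, the column can be queried like any other, and because it is STORED it can also be indexed – a small sketch against the hypothetical tablename above:

    SELECT startdate, dayname
    FROM tablename
    WHERE dayname = 'Saturday';

    -- index the stored column for faster filtering by day
    CREATE INDEX idx_tablename_dayname ON tablename (dayname);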

    MariaDB – Create query that matches value to a range in another table

    For example, let’s say we want to allocate transactions to a financial year but allow users to adjust that financial year depending on their circumstances. How could we do that?

    Here we have a table called
    t00165taxyears
    whose structure is

    pkid
    taxyear
    startdate
    enddate

    And a table of transactions called t0023finance

    CREATE VIEW v0004financeyear
    AS SELECT a.pkid pkidt0023,
    a.banked,
    a.transactiondate,
    a.description,
    a.category,
    a.Direction,
    a.hmrcint,
    a.pkidt0001,
    a.pkidt0007,
    a.vamount,
    a.invoiceno,
    a.pkidt0011,
    a.dupdated,
    a.dcreated,
    b.pkid pkidt00165,
    b.taxyear from t0023finance a, t00165taxyears b
    where
    a.banked between b.startdate and b.enddate

    Remarkably simple – though I suspect we might get cartesian join issues if the start dates and end dates in the t00165 table overlap.

    MS SQL Azure – TSQL to identify all Triggers in a database / Drop Trigger TSQL syntax

    I had been told that it was better not to use Triggers..

    Nonetheless I had to learn the hard way.

    So fast forward a period, and I’m wanting to remove triggers from a database; first I need to find where I put them in the first instance.

    SELECT 
         sysobjects.name AS trigger_name 
        ,USER_NAME(sysobjects.uid) AS trigger_owner 
        ,s.name AS table_schema 
        ,OBJECT_NAME(parent_obj) AS table_name 
        ,OBJECTPROPERTY( id, 'ExecIsUpdateTrigger') AS isupdate 
        ,OBJECTPROPERTY( id, 'ExecIsDeleteTrigger') AS isdelete 
        ,OBJECTPROPERTY( id, 'ExecIsInsertTrigger') AS isinsert 
        ,OBJECTPROPERTY( id, 'ExecIsAfterTrigger') AS isafter 
        ,OBJECTPROPERTY( id, 'ExecIsInsteadOfTrigger') AS isinsteadof 
        ,OBJECTPROPERTY(id, 'ExecIsTriggerDisabled') AS [disabled] 
    FROM sysobjects 
    /*
    INNER JOIN sysusers 
        ON sysobjects.uid = sysusers.uid 
    */  
    INNER JOIN sys.tables t 
        ON sysobjects.parent_obj = t.object_id 
    
    INNER JOIN sys.schemas s 
        ON t.schema_id = s.schema_id 
    WHERE sysobjects.type = 'TR'
    

    And running this against your database you should see something like this.

    And this is how you drop the triggers:

    IF OBJECT_ID ('sampledelete2chargehistory', 'TR') IS NOT NULL  
       DROP TRIGGER sampledelete2chargehistory;
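    If you would rather keep a trigger but stop it firing (note the ExecIsTriggerDisabled column in the query above), you can disable and re-enable it instead – a sketch, with a hypothetical table name:

    DISABLE TRIGGER sampledelete2chargehistory ON dbo.YourTable;
    -- and later, to switch it back on
    ENABLE TRIGGER sampledelete2chargehistory ON dbo.YourTable;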

    Thanks to Stack Overflow – I still get some answers from it rather than from AI.

    Stackoverflow

    SQL Server Express – starting to work with Geometry Functions in SQL Server

    Firstly I created a database.

    This is essentially me trying to implement in SQL Server what I had written about in Postgres here – Using ST_Within in Postgres

    Now I create the starting tables.

    CREATE TABLE [dbo].[t00001fields](
        [pkid] [int] IDENTITY(1,1) NOT NULL,
        [fieldname] [varchar](50) NULL,
        [geom] [geometry] NULL
    );

    Then

    CREATE TABLE [dbo].[t00002plots](
        [pkid] [int] IDENTITY(1,1) NOT NULL,
        [plotname] [varchar](50) NULL,
        [geom] [geometry] NULL
    );

    Now a magic junction view that will identify which plot is within which field when it is run

    CREATE VIEW v001FieldPlotJunction AS
    SELECT t00002plots.pkid as Plotspkid,t00001fields.pkid as Fieldspkid
    FROM
    t00002plots,
    t00001fields
    WHERE
    t00002plots.GEOM.STWithin(t00001fields.GEOM) = 1;
    

    and here’s an example of it running on local in SQL Express

    And this is what I am seeing when viewed through QGIS – many of the fields are smaller than the plots, and so although 1 and 2 look like they should be in the view, because the plot is not WITHIN the field it does not show, which is the correct response in this case.
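    If you want to try this without a GIS client, here is a minimal sketch of test data (hypothetical coordinates, SRID 0) followed by the view in action:

    INSERT INTO dbo.t00001fields (fieldname, geom)
    VALUES ('Field A', geometry::STGeomFromText('POLYGON((0 0, 100 0, 100 100, 0 100, 0 0))', 0));

    INSERT INTO dbo.t00002plots (plotname, geom)
    VALUES ('Plot 1', geometry::STGeomFromText('POLYGON((10 10, 20 10, 20 20, 10 20, 10 10))', 0));

    -- Plot 1 lies entirely within Field A, so one row should be returned
    SELECT * FROM v001FieldPlotJunction;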

    Batch File to split CSV into smaller parts

    So I am using ChatGPT regularly now, and this is a good example of where it can prove very useful. The second .bat file below was obtained from Stack Overflow and successfully split a large file into chunks of 2,500 lines. I then realised that each sub-file needed to have headers in it to assist in importing the information into a database, so I asked ChatGPT to alter the starting file to ensure that headers were included.

    It did it no problem..

    @echo off
    
    setlocal ENABLEDELAYEDEXPANSION
    REM Edit this value to change the name of the file that needs splitting. Include the extension.
    
    SET BFN=C:\csv\target.csv
    
    REM Edit this value to change the number of lines per file.
    SET LPF=2500
    
    REM Edit this value to change the name of each short file. It will be followed by a number indicating where it is in the list.
    SET SFN=SplitFile
    
    REM Do not change beyond this line.
    
    SET SFX=%BFN:~-3%
    SET /A LineNum=0
    SET /A FileNum=1
    set "header="
    for /f "tokens=* usebackq" %%a in ("%BFN%") do (
    
    if not defined header (
    set "header=%%a"
    echo !header! > %SFN%!FileNum!.%SFX%
    ) else (
    SET /A LineNum+=1
    echo %%a >> %SFN%!FileNum!.%SFX%
    if !LineNum! EQU !LPF! (
    SET /A LineNum=0
    SET /A FileNum+=1
    echo !header! > %SFN%!FileNum!.%SFX%
    )
    )
    
    )
    
    endlocal
    
    Pause
    

    This is the original obtained from Stack Overflow..

    @echo off
    
    setlocal ENABLEDELAYEDEXPANSION
    
    REM Edit this value to change the name of the file that needs splitting. Include the extension.
    SET BFN=C:\csv\target.csv
    
    REM Edit this value to change the number of lines per file.
    SET LPF=2500
    
    REM Edit this value to change the name of each short file. It will be followed by a number indicating where it is in the list.
    SET SFN=SplitFile
    
    REM Do not change beyond this line.
    
    SET SFX=%BFN:~-3%
    SET /A LineNum=0
    SET /A FileNum=1
    For /F "delims==" %%l in (%BFN%) Do (
    SET /A LineNum+=1
    echo %%l >> %SFN%!FileNum!.%SFX%
    if !LineNum! EQU !LPF! (
    SET /A LineNum=0
    SET /A FileNum+=1
    
    )
    
    )
    
    endlocal
    
    Pause
    

    Methodology Compound Interest Calculation (Updated October 2023)

    A formula that can be used for calculating compound interest

    A = P( 1 + (r/n) )^(nt)
    
    Where:
    A = Total Accrued Amount (Principal + Interest)
    P = Principal Amount
    r = Rate of Interest per year as a decimal
    (r = R/100, so for an interest rate of 4%, R is 4 and r is 0.04)
    t = time in years (which may be fractional)
    n = number of compounding periods per year
    (^ indicates to the power of)
    
    Note: Remember BODMAS when calculating with variables

    Reasoning – this breaks interest down into individual compounding periods. For example, within a year, when dealing with months, r is divided by 12 because the quoted rate is annual. The individual periods are then compounded using the power across all the periods, e.g. 24 periods for 2 years – see the examples below. NOTE: the r/n division is an empirical adjustment required because quoted interest rates from most sources are over a year (annual). Whenever obtaining interest rates from a source we must be careful with the r/n calculation, as it may need to be ignored; if for some reason the period over which interest is defined in the source is anything other than annual, this r/n calculation may require alteration.

    UK Base Interest Rates source
    Bank of England Base Rate

    XLS File of Base Rate changes since 1664 to June 2022
    Note – I took a snapshot at June 2022; for the latest, go to the Bank of England base rate page, where there is (at October 2022) a constantly updated xls file.

    So, for example, if we want to calculate interest on £100,000 over a period of 4 years and 8 months, based on an interest rate of 4.0% over a base of 0.5% (4.5% in total), over differing compounding periods;
    Compounded annually;
    A = 100,000 ( 1 + (0.045/1) )^4.67 = £122,821.10
    A = £122,821.10

    Compounded Monthly;
    A = 100,000 ( 1 + (0.045/12) )^56 = £123,319.40
    A = £123,319.40

    Compounded Daily
    A = 100,000 ( 1 + (0.045/365) )^1704 = 123,376.30
    A = £123,376.30
    Please note: as a simplification this calculation ignores leap years – re-calculate if important
    (^ indicates to the power of)
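    The worked examples above are easy to check in T-SQL (a quick sketch; the 1e0 keeps the arithmetic in floating point rather than decimal):

    SELECT 100000 * POWER(1e0 + 0.045/12, 56)    AS CompoundedMonthly, -- approx 123319.40
           100000 * POWER(1e0 + 0.045/365, 1704) AS CompoundedDaily;   -- approx 123376.30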

    Methodology – How to calculate the Interest rate to be used

    At the Bank of England, the Monetary Policy Committee (MPC), made up of appointed members, sets the Bank of England base rate; when they change it, it normally makes the national news. For example, on the 3rd of August 2023 the MPC set the interest rate at 5.25%. This is the interest figure I use in the above compound interest calculation when I need to calculate interest to be added by the council for holding money over a given period. But because, for any given period, it is unlikely that interest rates will have been uniform throughout, it is necessary either to calculate one figure for the entire period or to calculate the interest on the principal for each sub-period and aggregate the figures. Below is a method of calculating a single average daily interest rate for any period, given changes to the rates.

    So for example:

    Calculate the interest rate to be used for a return of developer contributions received on the 11 May 2023 and required to be returned on the 18 August 2023 in the knowledge that the interest rate on the 11 May 2023 was 4.5% but was raised by the MPC to 5% on 22 June 2023 and then raised to 5.25% on the 03 August 2023. Develop a framework that can be used to calculate an applicable interest rate that can be used in the compounding interest rate calculation.

    That gives 42 days at 4.5%, 42 days at 5% and 15 days at 5.25%, i.e. (42 x 4.5) + (42 x 5) + (15 x 5.25) = 477.75 rate-days over 99 days. The average interest rate over this period is therefore calculated as 477.75/99, or 4.825757%.

    This figure is consistent with the reporting by the Bank of England which changes rates on individual days rather than at the end of years or months. As such we should use the Compounding on a Daily basis to calculate the compounded interest to be returned as a result of holding this money for 99 days.

    Normally, for developer contributions, additional interest will be calculated over periods of years rather than mere days but the above template can be used for any given time period. Additionally and importantly it calculates the total number of days which is also required for the compound interest rate calculation.

    MS Azure – SQL Azure – Reliability update 001 (38 days) – using Azure Logic App to execute a Stored Procedure

    So a month ago, in a non-critical application, I set up a trigger for a stored procedure to copy records (about 3,000) from one table to another six times a day.

    Approximately 38 days in and I have 100% reliability.

    I have firewalled the required IP addresses listed by Microsoft
    Azure Logic Apps list of IPs for Firewalls

    I will keep running this for testing purposes to identify the reliability and report back

    MS Azure – TSQL a simple example using a Common Table Expression (CTE)

    A common table expression (CTE) is a temporary result set that you can reference within a SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A CTE is similar to a derived table in that it is not stored as an object and exists only for the duration of the query.

    I’m finding them useful as you can use them within a view to contain all the references required for that particular view. This I feel makes things more maintainable.

    An example of a simple common table expression (CTE) in T-SQL:

    WITH CTE_EmployeeSalaries AS
    (
        SELECT EmployeeID, Salary
        FROM Employees
        WHERE Salary > 50000
    )
    SELECT *
    FROM CTE_EmployeeSalaries;
    

    This CTE defines a virtual table named CTE_EmployeeSalaries that contains all employees with a salary greater than 50,000. The CTE is then used to select all rows from this virtual table.

    Common table expressions (CTEs) have several advantages:

    1. Improved readability: CTEs can make complex queries easier to read and understand by breaking them up into smaller, more manageable pieces.
    2. Improved maintainability: because CTEs are self-contained, they can be easily modified or replaced without affecting the rest of the query.
    3. Improved performance: in some cases CTEs can improve query performance by reducing the need for subqueries.
    4. Recursive queries: CTEs can be used to perform recursive queries, which are useful for querying hierarchical data such as an organizational chart or a bill of materials (see the sketch below).
    5. Temporary results: CTEs can be used to store intermediate results that are used multiple times within a single query. This can be more efficient than repeatedly calculating the same intermediate results.
    6. Better alternative to views: CTEs can be used as an alternative to views, which can be difficult to modify and are not always optimized for performance.
    7. Reusable: CTEs can be defined once and used multiple times within a single query or in multiple queries within the same session.
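    To illustrate point 4, here is a minimal sketch of a recursive CTE walking an organizational chart – it assumes the Employees table also has a ManagerID column, which is hypothetical here:

    WITH CTE_OrgChart AS
    (
        -- anchor: employees with no manager (the top of the tree)
        SELECT EmployeeID, ManagerID, 0 AS Depth
        FROM Employees
        WHERE ManagerID IS NULL
        UNION ALL
        -- recursive step: add the direct reports of the previous level
        SELECT e.EmployeeID, e.ManagerID, o.Depth + 1
        FROM Employees e
        INNER JOIN CTE_OrgChart o ON e.ManagerID = o.EmployeeID
    )
    SELECT *
    FROM CTE_OrgChart;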

    MS Azure TSQL writing comments

    In TSQL, comments can be added to code in two ways:

    1. Single-line comments: These start with two hyphens (--) and continue until the end of the line. For example:

    -- This is a single-line comment
    

    2. Multi-line comments: These start with a forward slash and an asterisk (/*) and continue until the closing asterisk and forward slash (*/). For example:

    /*
    This is a multi-line comment. It can span
    multiple lines and is often used to provide
    detailed explanations or to comment out large
    blocks of code.
    */
    

    When writing comments, it’s important to keep them clear and concise, and to use them to explain why something is being done rather than how it is being done. This helps to make your code more readable and easier to understand for other developers who may be reading it.