SQL Server / Postgres – Pattern for Distinct List / Combo Box source

This primarily refers to variable text fields, but in all of the CRUD systems I've ever designed every single page has contained fields that use lookup dialogs. Their sheer ubiquity makes their design and operation particularly important, significantly impacting an application's feel to users.

In principle, sourcing the values for a lookup appears to be a very simple choice:

  • Hard coded through the user interface (e.g. Y/N)
  • Referenced to a separate “lookup” table

As you design more systems you start to see further important options:

  • Referenced to distinct values sourced from a self referencing query of the table
  • Whether to store values as foreign keys
  • Whether to enforce validation at the database level.

Some general rules I follow:

    1) If the application allows users to bulk import data into a table, consider referencing a distinct values list sourced from a self referencing query of that table.
    2) Almost NEVER alias values / use foreign keys in tables that will receive imported data – and it is often better not to use them even where there is no import facility.
    3) The one exception to rule 2 can be customer account codes. Account codes for clients can be incredibly important and often link to a table that specifies, say, a company name and address – things that may change in time. Users really need to be trained to understand the derivation of foreign keys, or the codes will generally break import routines or break referencing and require correction. Account codes tend to be a weird case of a derived surrogate where the value is obviously derived from the lengthier value – e.g. AMZN for Amazon or MSFT for Microsoft in NASDAQ listings. This works for NASDAQ because of its ubiquity and because NASDAQ has educated its users; be prepared that you will need to educate your users if you use this pattern. For this reason my general rule is to only use foreign key aliases for account codes.
    4) If you have a field or table that will receive values which may be the subject of a lookup, don't be too harsh with the field validation in the database data definition – be more permissive with the field variable type and length than you might expect.

    And here is suggested SQL that can be used to identify distinct values in a field.

    SELECT DISTINCT account_code FROM table_name;
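
    If the lookup is feeding a combo box directly, one option is to wrap the distinct query in a view so the interface has a single row source to point at. A minimal sketch, reusing the placeholder names above:

    CREATE VIEW vLookupAccountCode AS
    SELECT DISTINCT account_code
    FROM table_name
    WHERE account_code IS NOT NULL;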
    

    SQL Azure – Using a Trigger to watch changes on fields

    The below trigger monitors a table called t001Table4Monitoring and four particular fields: three date fields and one integer field. It compares the new values (the inserted pseudo-table) to the old values (the deleted pseudo-table). As a comparison between NULL and a value never evaluates to true, we need to catch situations where a field is changed from NULL or to NULL – hence the ISNULL calls, which substitute a sentinel value. The record has a bit field, RecordChanged, that is updated to 1 (true) if a change is detected.

    This code was created with the help of GPT-4

    CREATE TRIGGER [dbo].[trg_Update_t001Table4Monitoring]
    ON [dbo].[t001Table4Monitoring]
    AFTER UPDATE
    AS
    BEGIN
        IF UPDATE(Date01Monitor) OR UPDATE(Date02Monitor) OR UPDATE(Date03Monitor) OR UPDATE(IntegerMonitor)
        BEGIN
            -- Compare new (inserted) values to old (deleted) values row by row
            -- so that multi-row updates are handled correctly. ISNULL substitutes
            -- a sentinel value so changes from NULL or to NULL are also caught.
            UPDATE t
            SET RecordChanged = 1
            FROM dbo.t001Table4Monitoring t
            INNER JOIN inserted i ON t.pkid = i.pkid
            INNER JOIN deleted d ON d.pkid = i.pkid
            WHERE (ISNULL(d.Date01Monitor, '1900-01-01') <> ISNULL(i.Date01Monitor, '1900-01-01')) OR
                  (ISNULL(d.Date02Monitor, '1900-01-01') <> ISNULL(i.Date02Monitor, '1900-01-01')) OR
                  (ISNULL(d.Date03Monitor, '1900-01-01') <> ISNULL(i.Date03Monitor, '1900-01-01')) OR
                  (ISNULL(d.IntegerMonitor, 0) <> ISNULL(i.IntegerMonitor, 0));
        END
    END;
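
    A quick way to sanity check the trigger – a hypothetical smoke test, assuming a row with pkid = 1 exists and RecordChanged starts at 0:

    -- Change one monitored field and confirm the flag gets set
    UPDATE dbo.t001Table4Monitoring
    SET Date01Monitor = '2024-01-01'
    WHERE pkid = 1;

    SELECT pkid, RecordChanged
    FROM dbo.t001Table4Monitoring
    WHERE pkid = 1;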

    SQL Azure – Adjusting for different timezones – a practical example

    Here I create a main table which can be used to translate times into a UTC value using a function I store here. It should be noted that if you have a field of type datetime and you set the default to GetDate(), the server will create a UTC value on record creation – but that time comes from the server, and with SQL Azure you don't have an option to change the server time zone.

    CREATE TABLE [dbo].[MainTable](
    	[pkid] [int] IDENTITY(1,1) NOT NULL,
    	[storeddatetime] [datetime] NULL,
    	[timezonename] [nvarchar](128) NULL,
    PRIMARY KEY CLUSTERED 
    (
    	[pkid] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    
    ALTER TABLE [dbo].[MainTable] ADD  CONSTRAINT [DF_MainTable_storeddatetime]  DEFAULT (getdate()) FOR [storeddatetime]
    GO
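
    For testing, a couple of hypothetical rows – the first leaves storeddatetime to the GetDate() default, the second sets it explicitly:

    INSERT INTO dbo.MainTable (timezonename)
    VALUES ('GMT Standard Time');

    INSERT INTO dbo.MainTable (storeddatetime, timezonename)
    VALUES ('2024-06-01 09:00', 'Pacific Standard Time');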
    

    Next we create a timezone table

    CREATE TABLE [dbo].[TimeZoneTable](
    	[pkid] [int] IDENTITY(1,1) NOT NULL,
    	[timezonename] [nvarchar](128) NULL,
    PRIMARY KEY CLUSTERED 
    (
    	[pkid] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    

    And we can populate this from the sys.time_zone_info system view.

    INSERT INTO TimeZoneTable (timezonename)
    SELECT name FROM sys.time_zone_info;

    We can create a function to convert this datetime value, knowing the timezone, into UTC as follows:

    CREATE FUNCTION dbo.fn_ConvertToUTC (@localDatetime DATETIME, @timeZone NVARCHAR(128))
    RETURNS DATETIME
    AS
    BEGIN
        DECLARE @utcDatetime DATETIME;
        SET @utcDatetime = @localDatetime AT TIME ZONE @timeZone AT TIME ZONE 'UTC';
        RETURN @utcDatetime;
    END;
    

    Now running the following query on some example data we get the following:

    SELECT 
        pkid, 
        storeddatetime, 
        timezonename, 
        dbo.fn_ConvertToUTC(storeddatetime, timezonename) AS utcValue
    FROM 
        MainTable;
    

    And here is a function that will convert to local time from UTC:

    CREATE FUNCTION dbo.fn_ConvertToLocal (@utcDatetime DATETIME, @timeZone NVARCHAR(128))
    RETURNS DATETIME
    AS
    BEGIN
        DECLARE @localDatetime DATETIME;
        SET @localDatetime = @utcDatetime AT TIME ZONE 'UTC' AT TIME ZONE @timeZone;
        RETURN @localDatetime;
    END;
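
    And a quick usage check (the date here is illustrative – 9am UTC on a BST date should come back as 10am local):

    SELECT dbo.fn_ConvertToLocal('2024-06-01 09:00', 'GMT Standard Time') AS localvalue;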

    Javascript option
    The getTimezoneOffset() method of the Date object in JavaScript calculates the difference, in minutes, between the local time zone and UTC. The method returns a positive value if the local time zone is behind UTC, and a negative value if the local time zone is ahead of UTC.

    The method works by subtracting the local time from UTC. For example, if the local time zone is 5 hours behind UTC, the method will return 300 (5 * 60).

    MS SQL Azure – Computed Column – Create UTC Datetime value from Datetime & Timezone fields – Daylight Saving

    This is not tested at the time of writing – it was obtained from ChatGPT-4.

    Answer
    Sure, here’s an example of how you might implement the ConvertToUTC function to handle time zones like ‘London/Europe’. This example uses the AT TIME ZONE clause, which is available in SQL Server 2016 and later:

    CREATE FUNCTION dbo.ConvertToUTC (@localDateTime DATETIME, @timeZone NVARCHAR(50))
    RETURNS DATETIME
    AS
    BEGIN
        -- Convert the local datetime to a datetimeoffset value using the specified time zone
        DECLARE @datetimeoffset DATETIMEOFFSET = @localDateTime AT TIME ZONE @timeZone
        -- Convert the datetimeoffset value to UTC
        DECLARE @utcDateTime DATETIME = CONVERT(DATETIME, SWITCHOFFSET(@datetimeoffset, 0))
        RETURN @utcDateTime
    END

    You can then use this function in a computed column definition:

    ALTER TABLE YourTable
    ADD UtcDateTime AS dbo.ConvertToUTC(field01, field02)

    In this example, field02 should contain time zone names that are valid for the AT TIME ZONE clause. For example, ‘UTC’, ‘Pacific Standard Time’, ‘Central European Standard Time’, etc. For London, you would use ‘GMT Standard Time’.

    Please note that this is just a basic example. The actual implementation would depend on how you’re storing time zone information and the specific requirements of your application. It’s also worth noting that handling date and time conversions can be complex due to factors like daylight saving time, so it’s important to thoroughly test your solution to ensure it handles all edge cases correctly.

    If you’re frequently performing these types of conversions, you might want to consider storing datetime values in UTC directly in your database. This can simplify your data model and avoid potential issues with time zone conversions.

    Question
    Does AT TIME ZONE have a default value that will alter depending on whether the UK is under British Summer time?

    Answer
    Yes, the AT TIME ZONE clause in SQL Server does take daylight saving time into account. When you use a time zone name with this clause, it will automatically adjust the time based on the current daylight saving time rules for that time zone.

    For example, if you use ‘GMT Standard Time’ (which is the time zone name for UK time in SQL Server), it will automatically show the correct offset for daylight saving when it is in effect.

    Here’s an example:

    SELECT GETDATE() AT TIME ZONE 'GMT Standard Time'

    This will return the current date and time on the database server BUT in my testing does NOT adjust the time to the timezone – it shows the correct time zone OFFSET for the date, corrected for daylight saving, i.e. Greenwich Mean Time (GMT) or British Summer Time (BST), depending on the date.

    Please note that the actual time zone names and their daylight saving time rules are based on the time zone data on the server where SQL Server is running. This data is updated through the Windows operating system, so it’s important to keep your servers up-to-date to ensure that the time zone rules are accurate.

    And here are what I think are the supported time zones
    MS Azure supported time zone values

    Post-script
    A lot of online editors won’t let you run the At Time Zone function but this one does..

    dofactory

    Comparing the returned time to the time where I am, I can tell that the server is on Pacific Standard Time – i.e. western America, California perhaps.

    We also note that the time isn't adjusted to the stated timezone, but we do see the offset (the +01:00 in the returned record). Again this is because AT TIME ZONE on its own doesn't tell you what the server is set to; but by using SYSDATETIMEOFFSET() we can correct the server time back to UTC and then adjust for the timezone – see the second code example. The timezone, although included in the returned value, is again only stated as an offset.

    SELECT SYSDATETIMEOFFSET() AT TIME ZONE 'W. Europe Standard Time' as WEuropeTime, 
    SYSDATETIMEOFFSET() AT TIME ZONE 'GMT Standard Time' as GMTStandard

    And so there is a direction to the timezone switch. In the above we have mainly been switching from UTC to a timezone, but the below switches from a timezone to UTC – which is what we will need if we are storing the input as a datetime plus a separate timezone for each record.

    SELECT SWITCHOFFSET(GetDate() AT TIME ZONE 'Pacific Standard Time', '+00:00')
    

    Remember though: timezones are held outside SQL Server databases, on the server, and as such functions that use them are non-deterministic. This is a good demonstration of determinism in practice:

    Deterministic algorithms are entirely predictable and always produce the same output for the same input.

    Non-deterministic algorithms may produce different outputs for the same input due to random events or other factors.
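
    A practical consequence, as a sketch against the MainTable from earlier: because AT TIME ZONE depends on the server's time zone data, SQL Server should refuse to persist a computed column built on it (the non-persisted computed column in the earlier section is fine).

    -- Expected to fail: the expression is non-deterministic so it cannot be PERSISTED
    ALTER TABLE dbo.MainTable
    ADD utcdatetime AS CONVERT(datetime, storeddatetime AT TIME ZONE timezonename AT TIME ZONE 'UTC') PERSISTED;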

    Databases and Time – Time Zones are not Datatypes

    When working with time in databases and web applications we should recognise that time zones are NOT a datatype. A time zone is an important piece of context that attaches to a datetime value – it may represent the time on the server, the time in the browser, or some time in a particular country that is an attribute of a record. Viewing a datetime value on its own is incomplete if users are entering data from all over the world. If this is not designed into the schema from the beginning, problems may appear when filtering and ordering records based on datetime.

    Does your application have users within the same or different time zones? Is your database hosted in the same or a different time zone?

    SQL Azure database servers, for instance, only allow a default time zone set to UTC. This means that wherever you are in the world, if you set a default value on a datetime field, new records will record time at UTC. This can present issues where users submit values based on a different localised time.

    Default values that are set to GetDate() can therefore be out by a given time offset. Additionally, web interfaces which set the datetime value at input may take it from the browser, and if the browser is not on UTC that time will come from a clock that may be ahead of or behind UTC. What's the solution? My suggestion is to record the time zone in a separate field per record alongside the datetime value, and then use a view to convert all datetimes to a common time zone (probably best UTC). At least that should reduce complexity for users. Developers and users should be aware that this could still be confusing when a user inputs a time into the database and is then presented with the UTC value when viewing it.
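
    As a sketch of that suggestion, using the MainTable and conversion function from earlier:

    CREATE VIEW vMainTableUTC AS
    SELECT pkid,
           storeddatetime,
           timezonename,
           dbo.fn_ConvertToUTC(storeddatetime, timezonename) AS utcdatetime
    FROM dbo.MainTable;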

    It should be noted that Azure Logic Apps run on a server that is not set to UTC but to local time (in my case Europe/London); I suspect that whatever locale you set will affect the local time. This led to a situation where, in British Summer Time, a logic app procedure triggered for 10pm was showing 9pm in the database – the BST offset.

    Repeat after me – Time Zones are not Datatypes

    And here is a nice site that identifies differences from UTC of timezones based on URL parameters

    Time around the world
    British Summer Time – 10am UTC

    Eastern Seaboard Time – 10am UTC

    Sydney Australia – Aussie Eastern Daylight Time – 10am UTC

    MS SQL Azure – TSQL to identify all Triggers in a database / Drop Trigger TSQL syntax

    I had been told that it was better not to use Triggers..

    Nonetheless I had to learn the hard way.

    So fast forward a period and I’m wanting to remove triggers from a database and I first need to find where I had put them in the first instance.

    SELECT 
         sysobjects.name AS trigger_name 
        ,USER_NAME(sysobjects.uid) AS trigger_owner 
        ,s.name AS table_schema 
        ,OBJECT_NAME(parent_obj) AS table_name 
        ,OBJECTPROPERTY( id, 'ExecIsUpdateTrigger') AS isupdate 
        ,OBJECTPROPERTY( id, 'ExecIsDeleteTrigger') AS isdelete 
        ,OBJECTPROPERTY( id, 'ExecIsInsertTrigger') AS isinsert 
        ,OBJECTPROPERTY( id, 'ExecIsAfterTrigger') AS isafter 
        ,OBJECTPROPERTY( id, 'ExecIsInsteadOfTrigger') AS isinsteadof 
        ,OBJECTPROPERTY(id, 'ExecIsTriggerDisabled') AS [disabled] 
    FROM sysobjects 
    /*
    INNER JOIN sysusers 
        ON sysobjects.uid = sysusers.uid 
    */  
    INNER JOIN sys.tables t 
        ON sysobjects.parent_obj = t.object_id 
    
    INNER JOIN sys.schemas s 
        ON t.schema_id = s.schema_id 
    WHERE sysobjects.type = 'TR'
    

    And running this against your database you should see something like this.

    And how you drop the triggers

    IF OBJECT_ID ('sampledelete2chargehistory', 'TR') IS NOT NULL  
       DROP TRIGGER sampledelete2chargehistory;

    Thanks to Stack Overflow – I still get some answers from it over AI

    Stackoverflow

    SQL Server Express – starting to work with Geometry Functions in SQL Server

    Firstly I created a database.

    This is essentially me trying to implement in SQL Server what I had written about in Postgres here – Using ST_Within in Postgres

    Now I create the starting tables.

    CREATE TABLE [dbo].[t00001fields](
    	[pkid] [int] IDENTITY(1,1) NOT NULL,
    	[fieldname] [varchar](50) NULL,
    	[geom] [geometry] NULL
    )
    

    Then

    CREATE TABLE [dbo].[t00002plots](
    	[pkid] [int] IDENTITY(1,1) NOT NULL,
    	[plotname] [varchar](50) NULL,
    	[geom] [geometry] NULL
    )
    

    Now a magic junction view that will identify which plot is within which field when it is run.

    CREATE VIEW v001FieldPlotJunction AS
    SELECT t00002plots.pkid as Plotspkid,t00001fields.pkid as Fieldspkid
    FROM
    t00002plots,
    t00001fields
    WHERE
    t00002plots.GEOM.STWithin(t00001fields.GEOM) = 1;
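
    To see the junction with names attached, a quick sketch against the tables above:

    SELECT f.fieldname, p.plotname
    FROM v001FieldPlotJunction j
    INNER JOIN t00001fields f ON f.pkid = j.Fieldspkid
    INNER JOIN t00002plots p ON p.pkid = j.Plotspkid;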
    

    and here’s an example of it running on local in SQL Express

    And this is what I am seeing when viewed through QGIS – many of the fields are smaller than the plots, and so although plots 1 and 2 look like they should be in the view, because the plot is not WITHIN the field it does not show – which is the correct response in this case.

    MS Azure – SQL Azure – Reliability update 001 (38 days) – using Azure Logic App to execute a Stored Procedure

    So a month ago, in a non-critical application, I set up a Logic App trigger for a stored procedure to copy records (about 3,000) from one table to another 6 times a day.

    Approximately 38 days in and I have 100% reliability.

    I have firewalled the required IP addresses listed by Microsoft
    Azure Logic Apps list of IPs for Firewalls

    I will keep running this for testing purposes to identify the reliability and report back

    MS Azure – TSQL a simple example using a Common Table Expression (CTE)

    A common table expression (CTE) is a temporary result set that you can reference within a SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A CTE is similar to a derived table in that it is not stored as an object and exists only for the duration of the query.

    I’m finding them useful as you can use them within a view to contain all the references required for that particular view. This I feel makes things more maintainable.

    An example of a simple common table expression (CTE) in T-SQL:

    WITH CTE_EmployeeSalaries AS
    (
        SELECT EmployeeID, Salary
        FROM Employees
        WHERE Salary > 50000
    )
    SELECT *
    FROM CTE_EmployeeSalaries;
    

    This CTE defines a virtual table named CTE_EmployeeSalaries that contains all employees with a salary greater than 50,000. The CTE is then used to select all rows from this virtual table.

    Common table expressions (CTEs) have several advantages:

    1.Improved readability: can make complex queries easier to read and understand by breaking them up into smaller, more manageable pieces.
    2.Improved maintainability: Because CTEs are self-contained, they can be easily modified or replaced without affecting the rest of the query.
    3.Improved performance: In some cases can improve query performance by reducing the need for subqueries.
    4.Recursive queries: CTEs can be used to perform recursive queries, which are useful for querying hierarchical data such as an organizational chart or a bill of materials.
    5.Temporary results: can be used to store intermediate results that are used multiple times within a single query. This can be more efficient than repeatedly calculating the same intermediate results.
    6.Better alternative to view: CTEs can be used as an alternative to views, which can be difficult to modify and are not always optimized for performance.
    7.Reusable: CTEs can be defined once and used multiple times within a single query or in multiple queries within the same session.
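
    As an illustration of point 4, a minimal recursive CTE sketch – assuming a hypothetical Employees table with EmployeeID, ManagerID and EmployeeName columns:

    WITH OrgChart AS
    (
        -- Anchor member: top-level employees with no manager
        SELECT EmployeeID, ManagerID, EmployeeName, 0 AS OrgLevel
        FROM Employees
        WHERE ManagerID IS NULL

        UNION ALL

        -- Recursive member: employees reporting to the previous level
        SELECT e.EmployeeID, e.ManagerID, e.EmployeeName, o.OrgLevel + 1
        FROM Employees e
        INNER JOIN OrgChart o ON e.ManagerID = o.EmployeeID
    )
    SELECT *
    FROM OrgChart;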

    MS Azure – TSQL writing comments

    In TSQL, comments can be added to code in two ways:

    1. Single-line comments: These start with two hyphens (--) and continue until the end of the line. For example:

    -- This is a single-line comment
    

    2. Multi-line comments: These start with a forward slash and an asterisk (/*) and continue until the closing asterisk and forward slash (*/). For example:

    /*
    This is a multi-line comment. It can span
    multiple lines and is often used to provide
    detailed explanations or to comment out large
    blocks of code.
    */
    

    When writing comments, it’s important to keep them clear and concise, and to use them to explain why something is being done rather than how it is being done. This helps to make your code more readable and easier to understand for other developers who may be reading it.

    SQL Azure – Using Azure Logic Apps to execute a Stored Procedure in SQL Azure

    It is important in a lot of systems to be able to run queries on a regular schedule. Previously I have outlined that this can be done using an automation account. It is also possible to accomplish this using an Azure Logic App to call a Stored Procedure.

    The procedure is
    Create Logic App
    Add items to it

    Firstly using SSMS connect to the database and create an example stored procedure.

    CREATE PROCEDURE [dbo].[TestInsert]
    AS
    BEGIN
    SET NOCOUNT ON
    
    INSERT INTO dbo.T0003NLPRSitesTarget
    (sitename)
    SELECT Sitename from dbo.T0003NLPRSites
    END
    GO
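
    You can test the stored procedure directly from SSMS before wiring it into the logic app:

    EXEC dbo.TestInsert;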

    Next log into your Azure Portal and identify the following icon

    Select it and then hit the Add button..

    You will be presented with the following screen

    Now you should be taken to an overview showing that the logic app has been created

    Hit Go to resource

    You should be presented with the following screen

    Here we choose Recurrence

    You will be presented with the following Logic Apps Designer

    Here is an example.
    Interval and Frequency together set how often the logic app will run – so if we put 3 in Interval and set Frequency to Day, it would run once every 3 days.
    At these hours – sets specific times when the logic app will run within the set Frequency.
    On these days – allows you to set on which days the logic app will run At these hours.

    To understand the recurrence, work from the bottom up. So the below example runs:

    On Monday, Tuesday, Wednesday, Thursday and Friday at 9, 10, 11, 12, 13, 14, 15, 16 and 17 hours,

    every week. (If Interval had been set to 3 it would have run every three weeks, on those same days and hours.)

    Now hit next step and then type in execute stored procedure.

    You will first need to set up the connection to the database – entering the server, database name, database username and password (or your chosen security) – and then you will be taken to the appropriate item.

    You will need to set up the firewall on the server, otherwise the logic app is likely to stop working at some point.

    You can get a list of the appropriate firewall rules to set up by going here..

    Managed connectors outbound IP addresses

    So after further testing with the following parameters I discovered that in practice the recurrence is a bit loose about the hours option. For example the following

    has been running at 43 minutes past the hour – which is indicative of when I hit the run trigger.

    I will be checking to see when it next runs but I suspect it is going to be 17:43.

    SQL Azure (Basic notes on backup operation)

    If you are running things on the cloud you always want to be thinking about backups and ensure that you are really aware of what is happening with backups and how to restore from backups. You should regularly check what is happening with backups and regularly practice restoring databases.

    Basic / Standard and Premium SQL Azure has two basic options for restore.

    Point in Time Restore and
    Long Term Retention

    They perform the same task – they collect old copies of databases that can be used to restore to a server if need be – but what varies is the UI by which you restore them.

    First point – Long Term Retention Policy is attached to the server and you go into it by first navigating to your server and then looking to the left hand section marked as Data management.

    Select Backups and you will be presented with a list of all the long term backups of the databases on that server. The following are a couple of basic DTU databases and what can be seen on the fictional server 01SeverName.

    In the above table you can see that the Long Term Retention policy has NOT been set. How do you change this?

    Firstly ensure that you are on the retention policies tab and then tick the particular database you would like to set the retention policy for; the lower section will then allow you to Configure policies for that particular database.

    Here are the options for a BASIC DTU database

    Compare this with the options for a database in the Standard Tier

    Set it up and then hit apply and you should be good to go.

    MS SQL Azure – CREATE USER with read only access to 2 tables

    In a lot of situations you may need a satellite website feeding off the main website with greatly reduced access.

    It is a good idea to give that user access to only what they need and limit it down to the bare minimum.
    Here I have an Azure database called LandRegister and I connect to it in the usual way through SSMS.


    USE LandRegister
    GO
    
    CREATE USER TestUserRestricted WITH PASSWORD = 'Password01';

    Next you can go to the security section after a refresh to see if the user has been added.

    And then grant access to the tables you wish them to see.

    USE LandRegister
    GO
    
    GRANT SELECT on dbo.T0003NLPRSites to TestUserRestricted
    GRANT SELECT on dbo.T0024Authority to TestUserRestricted;

    TestUserRestricted can now be used by your web application without any fear that:
    1) They can log in at server level through SQL Server Management Studio (a contained user can only connect to this specific database).
    2) They can see anything except the tables listed.
    3) They can do anything except SELECT statements on the stated tables (so no inserts, deletes or updates on those tables).

    Here I give full access to the table

    USE LandRegister
    GO
    
    GRANT INSERT,SELECT,DELETE,UPDATE ON 
    dbo.T039Country to TestUserRestricted;

    And it works with views as well

    Use LandRegister
    GO
    
    GRANT SELECT on dbo.v032 to TestUserRestricted;
    

    MS Azure & Web Applications – TSQL Create a check constraint on an nvarchar column specifying a minimum length in the DB, and enforce OWASP security through the front end

    Double security!
    Enforce length in the database and then through the front end by validating – in this case password values (PHPRunner example).

    First the backend.
    You have an nvarchar(128) column in an MS Azure SQL Database (or any Microsoft SQL Server database) and you wish to add a check constraint that does not allow a value in the column to be less than, let's say, 15 characters long.

    tablename = users
    field = userpassword

    ALTER TABLE dbo.users
    ADD CONSTRAINT [MinLengthConstraint]
    -- LEN counts characters; DATALENGTH on nvarchar counts bytes (2 per character)
    CHECK (LEN([userpassword]) >= 15);
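
    A quick way to prove the constraint works – this hypothetical insert (assuming the other columns of users allow it) should fail with a CHECK constraint violation because the password is too short:

    INSERT INTO dbo.users (userpassword)
    VALUES (N'short');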
    

    And here is a good way to check the constraints on an individual table using TSQL through SSMS for MS Azure.

    SELECT TABLE_NAME, 
    CONSTRAINT_TYPE, 
    CONSTRAINT_NAME 
    FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS 
    WHERE TABLE_NAME ='users';
    

    Front End
    Now for double security in your web application use a regular expression to prevent users entering a weak password.

    ^(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{15,128}$
    
    ^               # start of the string
    (?=.*\d)        # a digit must occur at least once
    (?=.*[a-z])     # a lower case letter must occur at least once
    (?=.*[A-Z])     # an upper case letter must occur at least once
    .{15,128}       # 15-128 character password must be a minimum of 15 characters and a maximum of 128
    $               # end of the string
    

    OWASP – Open Web Application Security Project

    SQL Azure – Creating a numbers table using a Cartesian Product and then expanding dates between two field values for individual records

    The following code uses a Common Table Expression to create a numbers table, which is then used to expand all dates between two date values of an individual record. This structure could be used in a variety of circumstances where you need to charge costs to a product over a period of time, or where you need to record consecutive dated values between two dates.

    I am using Whisky Casks as an example here. Imagine a cask came in on a day and needs to be matured for a period, and there will be a charge to an account for every day that the cask is being stored. Clearly going forward you might want to set up an overnight process to create the daily charges. But what happens when you wish to retrospectively create charges? Charge records build up quickly and it would be difficult to manually keep on top of the required record correction.

    Firstly I create a table that will hold a single record for each date a cask is in bond between the two dates. There are additional columns here as I need them for my use case (casks in bond).

    SET ANSI_NULLS ON
    GO
    
    SET QUOTED_IDENTIFIER ON
    GO
    
    CREATE TABLE [dbo].[t066tempexpand](
    	[pkid] [int] IDENTITY(1,1) NOT NULL,
    	[chargedate] [datetime] NULL,
    	[accountcode] [nvarchar](16) NULL,
    	[pkidt002] [int] NULL,
    	[pkidt053dailystoragecharge] [int] NULL,
    	[locationwarehouse] [nvarchar](50) NULL,
     CONSTRAINT [PK_0000temp] PRIMARY KEY CLUSTERED 
    (
    	[pkid] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    Next we create a CTE numbers table and use this to explode a single record for each dated day between the two dates – in this case bondindate and disgorgedate. I have an extra filter here that controls which casks are put into the table (here pkid > 2000). Other filters could be where disgorgedate is not null. The following line will be used to identify the charge date:
    dateadd(day,v.number,d.bondindate)
    and
    join Numbers v on v.number between 0 and datediff(day, bondindate, disgorgedate)
    The first line shows the date that will be charged. The join is particularly elegant as it restricts the numbers table, stored as a common table expression, to multiple records of an individual cask based on all dates between the two field values. The starting date identifies where to explode from and the second (in this case the disgorge date) identifies where it must end. If instead you wanted to bring the expansion up to today's date – or the day before today, if you have an overnight process running at midnight – use getdate() or getdate()-1 as the end point.

    The cross join in the common table expression definition is used to create a cartesian join of sys.columns on itself. From my investigations there is no particular significance to the sys.columns table, other than that it is a system table holding all columns in all tables in the database and is therefore likely to be somewhat large. It so happens that in the database I am using there are a lot of tables and a lot of columns, so a single self cross join creates more numbers than I will ever need. If you wish to create further records you can cross join the sys.columns table on itself again by repeating the cross join line with subsequent aliases, e.g. (cross join sys.columns c).

    After experimentation I discovered, for instance, that if I left out the cross join there were only some 4,000 records in my numbers table. This I suspect reflects the fact that I have 4,000 columns in the tables of my database. In my case 4,000 x 4,000 – the cartesian product of sys.columns on itself – results in 16 million records, more than enough numbers for me. Note that I am restricting the CTE numbers table to the top 20,000 anyway, which in our case relates to over 54 years – more back dating than I think I will ever need. The original code I discovered had three cross joins, presumably because the number of tables, and so columns, in that database was much smaller.
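
    If you want to check how many numbers your own database can generate before relying on this, something like the following will tell you (the counts will obviously vary by database):

    -- Rows available from one copy of sys.columns
    SELECT COUNT(*) AS columncount FROM sys.columns;

    -- Rows available from the cartesian product used in the CTE below
    SELECT COUNT_BIG(*) AS crossjoincount
    FROM sys.columns a
    CROSS JOIN sys.columns b;

    And now the numbers CTE and the insert itself.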

    ;WITH Numbers(number) AS (
      select top(20000) row_number() over (order by (select 0))-1
      from sys.columns a
      cross join sys.columns b
    )
    
    INSERT into t066tempexpand
    (chargedate,
    accountcode,
    pkidt002,
    pkidt053dailystoragecharge,
    locationwarehouse)
    select 
    dateadd(day,v.number,d.bondindate) 
    chargedate,
    accountcode, 
    pkid,
    pkidt053dailystoragecharge,
    locationwarehouse
      from t002caskfill d
      join Numbers v on v.number between 0 and datediff(day, bondindate, disgorgedate)
      where pkid > 2000
    

    Here I am joining with an extra table to find out the daily storage charge – this is just a test.

    select
    a.chargedate,
    a.accountcode,
    a.pkidt002,
    b.dailycharge,
    a.pkidt053dailystoragecharge,
    a.locationwarehouse,
    b.chargecode
    from dbo.t066tempexpand a
    join dbo.v128dailycaskstoragelk023 b on a.pkidt053dailystoragecharge=b.pkid 
    order by pkidt002,chargedate;
    

    And lastly I insert all this information into a Charge history table.

    INSERT INTO dbo.t0032chargehistory
    (chargedate,
    accountcode,
    pkidt002cask,
    chargedamount,
    pkidt053chargecode,
    locationwarehouse,
    chargecode)
    select
    a.chargedate,
    a.accountcode,
    a.pkidt002,
    b.dailycharge,
    a.pkidt053dailystoragecharge,
    a.locationwarehouse,
    b.chargecode
    from dbo.t066tempexpand a
    join dbo.v128dailycaskstoragelk023 b on a.pkidt053dailystoragecharge=b.pkid 
    order by pkidt002,chargedate;
    

    SQL Azure – Create a Function that uses multiple If Else statements

    Example CREATE FUNCTION code for SQL Azure that uses IF ELSE statements to compare three date parameters (referenced to fields) and determine an integer difference between dates – in this case whisky in bond, where we are calculating the number of days and the parameters are the date the cask arrived in bond, the date the cask went out of bond and the date the cask was disgorged.
    In English: I am calculating the number of days in bond from the bond in date to today's date if there is no bond out date or disgorge date. If either of those dates exists, the number of days in bond is from the bond in date to the earlier of those dates.

    I add in an extra day so that if a cask comes in on Tuesday and goes out the following day, the number of days is counted as 2.

    Clearly this is a nice structure that can be used in multiple contexts with different factors. I put the most common case in the first branch in the expectation that this will speed things up on the server.

    CREATE FUNCTION [dbo].[calculatedaysinbondcorrect]
    (
    @datebondin Date,
    @datebondout Date,
    @datedisgorge Date
    )
    RETURNS integer
    AS
    BEGIN
    DECLARE @date1 DATE, @date2 DATE, @date3 DATE;
    DECLARE @daysinbond INT;
    
    SET @date1 = DATEADD(DAY,-1,@datebondin)
    SET @date2 = @datedisgorge
    SET @date3 = @datebondout
    
    -- No bond out or disgorge date: count up to today
    IF ((@date2 IS NULL) AND (@date3 IS NULL))
    	BEGIN
    		SET @date2 = GETDATE()
    	END
    -- Only a disgorge date: count up to the disgorge date
    ELSE IF ((@date3 IS NULL) AND (@date2 IS NOT NULL))
    	BEGIN
    		SET @date2 = @datedisgorge
    	END
    -- Only a bond out date: count up to the bond out date
    ELSE IF ((@date3 IS NOT NULL) AND (@date2 IS NULL))
    	BEGIN
    		SET @date2 = @datebondout
    	END
    -- Both dates present: count up to the earlier of the two
    ELSE IF (@date2 > @date3)
    	BEGIN
    		SET @date2 = @datebondout
    	END
    ELSE IF (@date2 < @date3)
    	BEGIN
    		SET @date2 = @datedisgorge
    	END
    ELSE IF (@date2 = @date3)
    	BEGIN
    		SET @date2 = @datedisgorge
    	END
    
    SELECT @daysinbond = DATEDIFF(dd, @date1, @date2)
    
    RETURN @daysinbond;
    END
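
    Hedged usage examples (the dates are illustrative):

    -- Cask still in bond: counts from the bond in date up to today
    SELECT dbo.calculatedaysinbondcorrect('2024-01-01', NULL, NULL) AS daysinbond;

    -- Cask in on Tuesday, out the following day: counted as 2 days
    SELECT dbo.calculatedaysinbondcorrect('2024-01-02', '2024-01-03', NULL) AS daysinbond;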
    

    SQL Azure – Script to identify all Users in a SQL Azure DATABASE and how to remove Users

    SELECT name AS username,
         create_date, modify_date,
         type_desc AS type,
         authentication_type_desc AS authentication_type
    FROM sys.database_principals
    WHERE type NOT IN ('A', 'G', 'R', 'X')
         AND sid IS NOT NULL
    ORDER BY username;

    And to remove a user, one of the easiest ways both to see all the users and to remove any individual user is through SSMS. After connecting to your Azure database, within the database in question identify the Security branch and then the Users branch – right click on the user you wish to remove and select DELETE.
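
    The same can also be done in T-SQL, using the example user created earlier:

    USE LandRegister
    GO

    DROP USER TestUserRestricted;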

    This follows on from the post where we were trying to create a user – see here
    Creating Contained Users

    SQL Azure – Azure Automation Hybrid Worker Setup and RunAs Account

    UPDATE January 2024 – For SQL Azure you are better off using Logic Apps and whitelisting the outbound Logic Apps IPs, which are specific to the location where you are hosting your logic app.
    See Link for Logic App setup example
    and
    Azure Logic App IP addresses to White List

    My old notes…
    The highs and lows.

    You’re flying high – you’ve created a stored procedure that you’ve managed to reference in your automation account within a powershell runbook and you’ve got it nicely churning over on an extremely regular basis. You want to run it daily or weekly but as a test you run it hourly just to ensure that it works before releasing it to production. A day passes ok and you think wow this is really great. Then you forget about it and come back to it a week later. You find that somehow it stopped working at 1 am on Tuesday.

    What happened????

    Sorry you got hit by a block resulting from dynamic IP change on the runbook in the Automation Account – DAMN.
    (there is probably a good reason for having a dynamic IP on automation accounts but damn it was really going too well)

    So after quite a lot of investigation I found those in the know seemed to be saying the solution is:
    Azure Automation linked to a Hybrid Worker

    I kept investigating this and I kept getting directed to Microsoft documentation which although really extensive was just not helping me really get my head around things. It wasn’t until I found Travis Roberts that things started to click… I link to three of his videos below.

    So the downsides of Hybrid Worker setup..

  • Despite being the recommended solution configuration is complicated.
  • You need to create a VM with a static IP just to run the hybrid worker (and then grant firewall access to the databaseserver for that VM)
  • That VM needs to have 2 cores and 4Gb of Ram (yes unfortunately not the smallest VM so you could be looking at $25 a month charge)
  • You need to set up log analytics
  • It is recommended that log analytics is in the SAME Azure location as the sample database BUT not all Azure locations have log analytics set up… Use UK South! Hands up who has already set up their server and just realised it is in a location that doesn’t allow log analytics.
    So I was really struggling with this after my runbook unexpectedly started failing. After several days of reading and searching I found Travis Roberts. Hallelujah for MVPs.

    So I haven't done it yet, but I think the procedure is all defined in the links below.

    1. Setting up an Azure VM with a fixed IP (It can be any VM but Azure is good for me)
    Setup VM with static IP
    The server you set up will need to have internet connectivity with SSL outbound on port 443.

    2. Setting up Azure Log Analytics a step by step guide
    Setting up Azure Log Analytics a step by step guide

    3. Azure Automation Setup and Linking Log Analytics to your Automation account
    Set up Azure Automation and link it to Log Analytics

    4. Configuring an Azure Automation Hybrid Runbook Worker to run your runbook.
    Installation of Hybrid Runbook Worker
    Includes information about Runbook Worker Group and
    RunAsAccount
    You add credentials to the Runbook Worker Group

    Firstly a big thank you to Travis Roberts for putting out those videos – he has loads of Azure-centric content on his YouTube channel, which you can find here.

    CAVEAT: I haven't implemented this myself properly, and at the time of writing (December 2021) those videos are a couple of years old. At the start of one of the videos Travis indicates that things are changing quickly in this space, so expect that there have been changes since they were made. Let's face it, this should really be easier – Azure is just frustrating its users with this complicated configuration.

    SQL Azure – Using Azure Automation and Runbooks to Schedule an Azure SQL Stored Procedure

    UPDATE January 2024 – I have moved away from Runbooks for scheduling Azure SQL stored procedures towards using Logic Apps, but I keep this here as it's always good to have alternative methods of completing the same task. Link to Logic Apps Setup

    Original Post
    It's vital to be able to run stored procedures on a schedule – it can be extremely important for doing daily housekeeping.

    This site was where I learned how to do this.
    How to Schedule an Azure SQL Database’s Stored Procedure

    Runbooks sit in Azure Automation and a Runbook consists of a script that performs a task.

    Runbooks can be written in Python or Powershell. See the last part of this post for an important additional issue.

    "Data Source=01server.database.windows.net;Initial Catalog=database;Integrated Security=False;User ID=01userid;Password=JK12jl423;Connect Timeout=60;Encrypt=False;TrustServerCertificate=False"
    
    # Instantiate the connection to the SQL Database
    $sqlConnection = new-object System.Data.SqlClient.SqlConnection
    $sqlConnection.ConnectionString = "Data Source=servername.database.windows.net;Initial Catalog=database;Integrated Security=False;User ID=userid;Password=password;Connect Timeout=60;Encrypt=False;TrustServerCertificate=False"
    
    $sqlConnection.Open()
    Write-Output "Azure SQL database connection opened"
    
    # Define the SQL command to run
    $sqlCommand = new-object System.Data.SqlClient.SqlCommand
    $sqlCommand.CommandTimeout = 120
    $sqlCommand.Connection = $sqlConnection
    
    Write-Output "Issuing command to run stored procedure"
    
    # Execute the SQL command
    $sqlCommand.CommandText= 'exec [dbo].[storedProcedure]'
    $result = $sqlCommand.ExecuteNonQuery()
    Write-Output "Stored procedure execution completed"
    
    # Close the SQL connection
    $sqlConnection.Close()
    
    Write-Output "Run completed"
    

    Honestly the hardest part is getting the code; setting up the scheduling after that is relatively easy.

    WARNING: Runbooks have dynamic IPs, so although you can grant access to an IP, your runbook will only run for a certain period before it stops working – unless you either have a Hybrid Runbook Worker setup OR you allow Azure services and resources to access the server (which isn't recommended). The next post I will do is my initial notes on how to configure a Hybrid Runbook Worker.