Long Passwords : Evidence showing why long passwords are a simple security principle

Creating long passwords is one of the easiest ways of improving the security of your applications and resources – here’s a graphic that estimates the time it might take to brute force a password depending on its length and the complexity of its character set.

Secure Password Generator

Last Pass Password Generator

Norton Password Generator

1Password – random password generator

bitwarden random password generator

AVAST Password Generator

MariaDB – Add calculated field that shows day name

ALTER TABLE tablename 
ADD COLUMN dayname VARCHAR(20) GENERATED ALWAYS AS (DAYNAME(startdate)) 
STORED;

This query will add a new column called dayname to the tablename table. The column will be of type VARCHAR with a maximum length of 20 characters. The GENERATED ALWAYS clause tells MariaDB to calculate the value of the column each time a row is inserted or updated. The AS keyword specifies the expression used to calculate the value, in this case the DAYNAME function applied to the startdate column. Finally, the STORED keyword tells MariaDB to store the calculated value in the table so that it can be retrieved more efficiently.

Note that the GENERATED ALWAYS and STORED syntax requires MariaDB 10.2 or later (the often-quoted 5.7.6 version requirement applies to MySQL, not MariaDB; older MariaDB releases use the PERSISTENT keyword instead of STORED). If you are using an earlier version, you can still add a calculated field using a trigger or a view.
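As a quick sanity check – a sketch assuming the tablename and startdate names from the ALTER statement above – you can insert a row and read the generated column back; no value is supplied for dayname because MariaDB calculates it.

-- dayname is populated automatically from startdate
INSERT INTO tablename (startdate) VALUES ('2023-01-06');

SELECT startdate, dayname
FROM tablename
ORDER BY startdate;
-- 2023-01-06 returns 'Friday'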

MariaDB – Create query that matches value to a range in another table

For example, let’s say we want to allocate transactions to a financial year but allow the users to adjust that financial year depending on their circumstances. How could we do that?

Here we have a table called t00165taxyears whose structure is

pkid
taxyear
startdate
enddate

And a table of transactions called t0023finance

CREATE VIEW v0004financeyear
AS SELECT a.pkid pkidt0023,
a.banked,
a.transactiondate,
a.description,
a.category,
a.Direction,
a.hmrcint,
a.pkidt0001,
a.pkidt0007,
a.vamount,
a.invoiceno,
a.pkidt0011,
a.dupdated,
a.dcreated,
b.pkid pkidt00165,
b.taxyear from t0023finance a, t00165taxyears b
where
a.banked between b.startdate and b.enddate

Remarkably simple – though I suspect we would get Cartesian join issues if the start and end dates in the t00165taxyears table overlap: a transaction whose banked date falls into two overlapping tax years would appear once for each matching year.
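If you want to guard against that, here is a minimal sketch (assuming the t00165taxyears structure above) that lists any pairs of tax years whose date ranges overlap; ideally it should return no rows.

-- Any row returned means two tax years overlap and transactions could be duplicated in the view
SELECT a.pkid, a.taxyear, b.pkid AS overlappingpkid, b.taxyear AS overlappingtaxyear
FROM t00165taxyears a
JOIN t00165taxyears b
  ON a.pkid < b.pkid
 AND a.startdate <= b.enddate
 AND b.startdate <= a.enddate;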

MS SQL Azure – TSQL to identify all Triggers in a database / Drop Trigger TSQL syntax

I had been told that it was better not to use Triggers..

Nonetheless I had to learn the hard way.

So fast forward a while and I want to remove triggers from a database, and first I need to find where I put them in the first place.

SELECT 
     sysobjects.name AS trigger_name 
    ,USER_NAME(sysobjects.uid) AS trigger_owner 
    ,s.name AS table_schema 
    ,OBJECT_NAME(parent_obj) AS table_name 
    ,OBJECTPROPERTY( id, 'ExecIsUpdateTrigger') AS isupdate 
    ,OBJECTPROPERTY( id, 'ExecIsDeleteTrigger') AS isdelete 
    ,OBJECTPROPERTY( id, 'ExecIsInsertTrigger') AS isinsert 
    ,OBJECTPROPERTY( id, 'ExecIsAfterTrigger') AS isafter 
    ,OBJECTPROPERTY( id, 'ExecIsInsteadOfTrigger') AS isinsteadof 
    ,OBJECTPROPERTY(id, 'ExecIsTriggerDisabled') AS [disabled] 
FROM sysobjects 
/*
INNER JOIN sysusers 
    ON sysobjects.uid = sysusers.uid 
*/  
INNER JOIN sys.tables t 
    ON sysobjects.parent_obj = t.object_id 

INNER JOIN sys.schemas s 
    ON t.schema_id = s.schema_id 
WHERE sysobjects.type = 'TR'

And running this against your database you should see something like this.

And this is how you drop a trigger

IF OBJECT_ID ('sampledelete2chargehistory', 'TR') IS NOT NULL  
   DROP TRIGGER sampledelete2chargehistory;
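If there are a lot of them, the following sketch (my addition, using the newer sys.triggers catalog view rather than sysobjects) generates a DROP TRIGGER statement for every DML trigger so you can review and run them in one go.

SELECT 'DROP TRIGGER ' + QUOTENAME(s.name) + '.' + QUOTENAME(tr.name) + ';'
FROM sys.triggers tr
INNER JOIN sys.tables t ON tr.parent_id = t.object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE tr.is_ms_shipped = 0;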

Thanks to Stack Overflow – I still get some answers from it rather than from AI

Stackoverflow

SQL Server Express – starting to work with Geometry Functions in SQL Server

Firstly I created a database.

This is essentially me trying to implement in SQL Server what I had written about in Postgres here – Using ST_Within in Postgres

Now I create the starting tables.

CREATE TABLE [dbo].[t00001fields](
	[pkid] [int] IDENTITY(1,1) NOT NULL,
	[fieldname] [varchar](50) NULL,
	[geom] [geometry] NULL
);

Then

CREATE TABLE [dbo].[t00002plots](
	[pkid] [int] IDENTITY(1,1) NOT NULL,
	[plotname] [varchar](50) NULL,
	[geom] [geometry] NULL
);
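To give the view something to work with, here are a couple of hedged sample rows – the names and coordinates are made up for illustration, and geometry::STGeomFromText is the standard SQL Server constructor (SRID 0 here).

-- One field and two plots; Plot 1 sits inside Field A, Plot 2 sits outside it
INSERT INTO dbo.t00001fields (fieldname, geom)
VALUES ('Field A', geometry::STGeomFromText('POLYGON((0 0, 100 0, 100 100, 0 100, 0 0))', 0));

INSERT INTO dbo.t00002plots (plotname, geom)
VALUES ('Plot 1', geometry::STGeomFromText('POLYGON((10 10, 20 10, 20 20, 10 20, 10 10))', 0)),
       ('Plot 2', geometry::STGeomFromText('POLYGON((150 150, 160 150, 160 160, 150 160, 150 150))', 0));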

Now a magic junction view that will identify which plot is within which field when it is run

CREATE VIEW v001FieldPlotJunction AS
SELECT t00002plots.pkid as Plotspkid,t00001fields.pkid as Fieldspkid
FROM
t00002plots,
t00001fields
WHERE
t00002plots.GEOM.STWithin(t00001fields.GEOM) = 1;
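A quick way to check the output is to join the view back to the name columns (a sketch using the tables above; with the sample rows only Plot 1 / Field A should be returned, because Plot 2 sits outside the field).

SELECT f.fieldname, p.plotname
FROM v001FieldPlotJunction j
INNER JOIN dbo.t00001fields f ON f.pkid = j.Fieldspkid
INNER JOIN dbo.t00002plots p ON p.pkid = j.Plotspkid;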

and here’s an example of it running on local in SQL Express

And this is what I am seeing when viewed through QGIS – many of the fields are smaller than the plots, so although plots 1 and 2 look like they should be in the view, the plot is not WITHIN the field and so it does not show, which is the correct response in this case.

Batch File to split CSV into smaller parts

So I am using ChatGPT regularly now and this is a good example of where it can prove very useful. The bottom batch file was obtained from Stack Overflow and successfully split a large file into chunks of 2,500 lines. I then realised that each sub file needed to have headers in it to assist in importing the information into a database. I asked ChatGPT to alter the original file so that headers were included in every sub file.

It did it no problem..

@echo off

setlocal ENABLEDELAYEDEXPANSION
REM Edit this value to change the name of the file that needs splitting. Include the extension.

SET BFN=C:\csv\target.csv

REM Edit this value to change the number of lines per file.
SET LPF=2500

REM Edit this value to change the name of each short file. It will be followed by a number indicating where it is in the list.
SET SFN=SplitFile

REM Do not change beyond this line.

SET SFX=%BFN:~-3%
SET /A LineNum=0
SET /A FileNum=1
set "header="
for /f "tokens=* usebackq" %%a in ("%BFN%") do (

if not defined header (
set "header=%%a"
echo !header! > %SFN%!FileNum!.%SFX%
) else (
SET /A LineNum+=1
echo %%a >> %SFN%!FileNum!.%SFX%
if !LineNum! EQU !LPF! (
SET /A LineNum=0
SET /A FileNum+=1
echo !header! > %SFN%!FileNum!.%SFX%
)
)

)

endlocal

Pause

This is the original obtained from Stack Overflow..

@echo off

setlocal ENABLEDELAYEDEXPANSION

REM Edit this value to change the name of the file that needs splitting. Include the extension.
SET BFN=C:\csv\target.csv

REM Edit this value to change the number of lines per file.
SET LPF=2500

REM Edit this value to change the name of each short file. It will be followed by a number indicating where it is in the list.
SET SFN=SplitFile

REM Do not change beyond this line.

SET SFX=%BFN:~-3%
SET /A LineNum=0
SET /A FileNum=1
For /F "delims==" %%l in (%BFN%) Do (
SET /A LineNum+=1
echo %%l >> %SFN%!FileNum!.%SFX%
if !LineNum! EQU !LPF! (
SET /A LineNum=0
SET /A FileNum+=1

)

)

endlocal

Pause

MS Azure – SQL Azure – Reliability update 001 (38 days) – using Azure Logic App to execute a Stored Procedure

So a month ago, in a non-critical application, I set up a trigger for a stored procedure that copies records (about 3,000) from one table to another 6 times a day.

Approximately 38 days in and I have 100% reliability.

I have firewalled the required IP addresses listed by Microsoft
Azure Logic Apps list of IPs for Firewalls

I will keep running this for testing purposes to monitor the reliability and will report back.

MS Azure – TSQL a simple example using a Common Table Expression (CTE)

A common table expression (CTE) is a temporary result set that you can reference within a SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A CTE is similar to a derived table in that it is not stored as an object and exists only for the duration of the query.

I’m finding them useful as you can use them within a view to contain all the references required for that particular view. This I feel makes things more maintainable.

An example of a simple common table expression (CTE) in T-SQL:

WITH CTE_EmployeeSalaries AS
(
    SELECT EmployeeID, Salary
    FROM Employees
    WHERE Salary > 50000
)
SELECT *
FROM CTE_EmployeeSalaries;

This CTE defines a virtual table named CTE_EmployeeSalaries that contains all employees with a salary greater than 50,000. The CTE is then used to select all rows from this virtual table.

Common table expressions (CTEs) have several advantages:

1. Improved readability: CTEs can make complex queries easier to read and understand by breaking them up into smaller, more manageable pieces.
2. Improved maintainability: because CTEs are self-contained, they can be easily modified or replaced without affecting the rest of the query.
3. Improved performance: in some cases CTEs can improve query performance by reducing the need for subqueries.
4. Recursive queries: CTEs can be used to perform recursive queries, which are useful for querying hierarchical data such as an organizational chart or a bill of materials (see the sketch after this list).
5. Temporary results: CTEs can be used to store intermediate results that are used multiple times within a single query, which can be more efficient than repeatedly calculating the same intermediate results.
6. Better alternative to views: CTEs can be used as an alternative to views, which can be difficult to modify and are not always optimized for performance.
7. Reusable: CTEs can be defined once and referenced multiple times within the statement they belong to.
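Since recursive queries come up in point 4, here is a minimal sketch of one. The Employees table is reused from the example above, but the ManagerID column is an assumption added for illustration.

WITH CTE_Hierarchy AS
(
    -- Anchor: employees with no manager (the top of the tree)
    SELECT EmployeeID, ManagerID, 0 AS Depth
    FROM Employees
    WHERE ManagerID IS NULL
    UNION ALL
    -- Recursive step: everyone who reports to someone already in the CTE
    SELECT e.EmployeeID, e.ManagerID, h.Depth + 1
    FROM Employees e
    INNER JOIN CTE_Hierarchy h ON e.ManagerID = h.EmployeeID
)
SELECT *
FROM CTE_Hierarchy;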

MS Azure – TSQL writing comments

In TSQL, comments can be added to code in two ways:

1. Single-line comments: These start with two hyphens (--) and continue until the end of the line. For example:

-- This is a single-line comment

2. Multi-line comments: These start with a forward slash and an asterisk (/*) and continue until the closing asterisk and forward slash (*/). For example:

/*
This is a multi-line comment. It can span
multiple lines and is often used to provide
detailed explanations or to comment out large
blocks of code.
*/

When writing comments, it’s important to keep them clear and concise, and to use them to explain why something is being done rather than how it is being done. This helps to make your code more readable and easier to understand for other developers who may be reading it.

SQL Azure – Using Azure Logic Apps to execute a Stored Procedure in SQL Azure

It is important in a lot of systems to be able to run queries on a regular schedule. Previously I have outlined that this can be done using an automation account. It is also possible to accomplish this using an Azure Logic App to call a Stored Procedure.

The procedure is:
1. Create the Logic App
2. Add items to it

Firstly using SSMS connect to the database and create an example stored procedure.

CREATE PROCEDURE [dbo].[TestInsert]
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.T0003NLPRSitesTarget (sitename)
    SELECT Sitename FROM dbo.T0003NLPRSites;
END
GO
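Before wiring it into the Logic App it is worth checking from SSMS that the procedure runs cleanly (a quick test against the tables above):

EXEC dbo.TestInsert;

-- Confirm the rows arrived in the target table
SELECT COUNT(*) FROM dbo.T0003NLPRSitesTarget;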

Next log into your Azure Portal and identify the following icon

Select it and then hit the Add button..

You will be presented with the following screen

Now you should be taken to an overview showing that the logic app has been created

Hit Go to resource

You should be presented with the following screen

Here we choose Recurrence

You will be presented with the following Logic Apps Designer

Here is an example.

Interval sets how many Frequency units pass between runs – so if we put 3 in Interval and set Frequency to Day, it would run once every 3 days.
At these hours – sets the specific times when the logic app will run within the set Frequency.
On these days – lets you set which days the logic app will run at those hours.

To understand when the logic app runs, work from the bottom up. So the example below runs on Monday, Tuesday, Wednesday, Thursday and Friday, at 9, 10, 11, 12, 13, 14, 15, 16 and 17 hours, every week. (If Interval had been set to 3 it would have run every three weeks on those days and at those hours.)

Now hit Next step and then type in Execute stored procedure.

You will first need to set up the connection to the database – enter the server, database name, database username and password (or your chosen security method for connecting to the database) – and then you will be taken to the appropriate item.

You will need to set up the firewall on the server, otherwise the logic app is likely to stop working at some point..

You can get a list of the appropriate firewall rules to set up by going here..

Managed connectors outbound IP addresses

So after further testing with the following parameters I discovered that in practice the Logic App is a bit loose about the hours recurrence option. For example the following

has been running at 43 minutes past the hour, which is indicative of when I hit the run trigger..

I will be checking to see when it next runs but I suspect it is going to be 17:43

SQL Azure (Basic notes on backup operation)

If you are running things on the cloud you always want to be thinking about backups: make sure you are really aware of what is happening with them and how to restore from them. You should check what is happening with backups regularly and regularly practice restoring databases.

Basic, Standard and Premium SQL Azure have two basic options for restore:

Point in Time Restore and
Long Term Retention

They perform the same task – they keep old copies of databases that can be used to restore to a server if need be – but what varies is the UI by which you restore them.

First point – Long Term Retention Policy is attached to the server and you go into it by first navigating to your server and then looking to the left hand section marked as Data management.

Select Backups and you will be presented with a list of all the long term backups of the databases on that server. The following are a couple of basic DTU databases and what can be seen on the fictional server 01ServerName.

In the above table you can see that the Long Term Retention policy has NOT been set. How do you change this?

Firstly ensure that you are on the Retention policies tab, then tick the particular database you would like to set the retention policy for; the lower section will then allow you to Configure a policy for that particular database.

Here are the options for a BASIC DTU database

Compare this with the options for a database in the Standard Tier

Set it up and then hit apply and you should be good to go.

MS SQL Azure – CREATE USER with read only access to 2 tables

In a lot of situations you may need a satellite website feeding off the main database with greatly reduced access.

It is a good idea to give that user access only to what they need and to limit it down to the bare minimum.
Here I have an Azure database called LandRegister and I connect to it in the usual way through SSMS.


USE LandRegister
GO

CREATE USER TestUserRestricted WITH PASSWORD = 'Password01';

Next you can go to the security section after a refresh to see if the user has been added.

And then grant access to the tables you wish them to see.

USE LandRegister
GO

GRANT SELECT on dbo.T0003NLPRSites to TestUserRestricted
GRANT SELECT on dbo.T0024Authority to TestUserRestricted;

TestUserRestricted can now be used by your web application safe in the knowledge that:
1) The user cannot log in at server level through SQL Server Management Studio – they can only connect to this database.
2) They cannot see anything except the tables listed.
3) They cannot do anything except SELECT statements on the stated tables (so no inserts, deletes or updates on those tables).

Here I give full access to the table

USE LandRegister
GO

GRANT INSERT,SELECT,DELETE,UPDATE ON 
dbo.T039Country to TestUserRestricted;

And it works with views as well

Use LandRegister
GO

GRANT SELECT on dbo.v032 to TestUserRestricted;
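To double check exactly what the user ended up with, a sketch against the permissions catalog (run while connected to LandRegister) lists every object-level permission granted to TestUserRestricted.

SELECT pr.name AS principal_name,
       pe.permission_name,
       pe.state_desc,
       OBJECT_SCHEMA_NAME(pe.major_id) + '.' + OBJECT_NAME(pe.major_id) AS object_name
FROM sys.database_permissions pe
INNER JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'TestUserRestricted'
  AND pe.class = 1;  -- class 1 = object or column permissions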

MS Azure & Web Applications – TSQL Create a check constraint on nvarchar column specifying a minimum length in DB and through the front end ensure enforce OWASP security

Double security!
Enforce length in database and then through the front end by validating in this case password values (PHPRunner example).

First the backend
You have an nvarchar(128) column in an MS Azure SQL Database (or any Microsoft SQL Server database) and you wish to add a check constraint that does not allow a value in the column to be less than, let’s say, 15 characters long.

tablename = users
field = userpassword

ALTER TABLE dbo.users
ADD CONSTRAINT [MinLengthConstraint] 
CHECK (LEN([userpassword]) >= 15);
-- LEN counts characters; DATALENGTH would count bytes (2 per character for nvarchar)
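A quick way to confirm the constraint behaves as intended (a sketch – the sample values are made up and assume the other columns in dbo.users allow defaults or NULLs):

-- Should fail the check constraint: only 8 characters
INSERT INTO dbo.users (userpassword) VALUES (N'short pw');

-- Should succeed: 16 characters
INSERT INTO dbo.users (userpassword) VALUES (N'a much longer pw');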

And then a good way to check the constraints on an individual table using TSQL through SSMS for MS AZURE.

SELECT TABLE_NAME, 
CONSTRAINT_TYPE, 
CONSTRAINT_NAME 
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS 
WHERE TABLE_NAME ='users';

Front End
Now for double security in your web application use a regular expression to prevent users entering a weak password.

^(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{15,128}$
^               # start of the string
(?=.*\d)        # a digit must occur at least once
(?=.*[a-z])     # a lower case letter must occur at least once
(?=.*[A-Z])     # an upper case letter must occur at least once
.{15,128}       # total length must be a minimum of 15 characters and a maximum of 128
$               # end of the string

OWASP – Open Web Application Security Project

cPanel – take a manual backup of a MariaDB / MySQL database

There is no shortage of articles on how to take a backup of a MySQL or MariaDB database from cPanel; however, given how important it is, I like to write these things down showing how I accomplished it, for my own reference. It is extremely easy to download a backup, which means you have no reason not to do it regularly if you have an important database that, for instance, is part of a web application.

1. Enter your given cPanel management portal.

2. Find the database section and select the phpMyAdmin icon.

3. Select the database you are interested in from the left-hand side.

4. Click Export in the menu section and then it’s just a case of clicking Go. An SQL file will be downloaded to the download directory (on a Windows machine). Keep this file – you can run it to create a new database.

And it’s very easy to go in and check that it’s OK – you can use any good IDE, or you could change the suffix to .txt and just look at it in Word.
Happy backing up!

MS Azure – Web App Service – Notes on creating a staging web front end on an existing Web App Service

At some point in developing sites for clients you may want a satellite site feeding off the main database but with prototype features on it. Considering that certain levels of Web App Service come with free staging slots, you may as well use these rather than either the free tier, which may be less well provisioned, or a separate paid site that you will have to pay additional money for. The number of slots available will depend on the service tier you are on.

Beginners guide to setting up a staging slot

Once you have the site up and running you operate it in the same way as a normal service and so you can get the publishing details from the Get Publish Profile.

SQL Azure – Creating a numbers table using a Cartesian Product and then expanding dates between two field values for individual records

The following code uses a Common Table Expression to create a numbers table, which is then used to expand all dates between two date values of an individual record. This structure could be used in a variety of circumstances where you need to charge costs to a product over a period of time, or where you need to record consecutive dated values between two dates.

I am using whisky casks as an example here. Imagine a cask comes in on a day, needs to be matured for a period, and there will be a charge to an account for every day that the cask is being stored. Clearly, going forward, you might want to set up an overnight process to create the daily charges. But what happens when you wish to retrospectively create charges? Charge records build up quickly and it would be difficult to keep on top of the required record corrections manually.

Firstly I create a table that will hold a single record for each date a cask is in bond between the two dates. There are additional columns here as I need them for my use case (casks in bond).

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[t066tempexpand](
	[pkid] [int] IDENTITY(1,1) NOT NULL,
	[chargedate] [datetime] NULL,
	[accountcode] [nvarchar](16) NULL,
	[pkidt002] [int] NULL,
	[pkidt053dailystoragecharge] [int] NULL,
	[locationwarehouse] [nvarchar](50) NULL,
 CONSTRAINT [PK_0000temp] PRIMARY KEY CLUSTERED 
(
	[pkid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

Next we create a CTE numbers table and use this to explode a single record for each day between the two dates – in this case bondindate and disgorgedate. I have an extra filter here that restricts which casks are put into the table (here pkid > 2000). Another filter could be where disgorgedate is not null. The following line will be used to identify the charge date.
dateadd(day,v.number,d.bondindate)
and
join Numbers v on v.number between 0 and datediff(day, bondindate, disgorgedate)
The first line computes the date that will be charged. The join is particularly elegant as it restricts the numbers table, stored as a common table expression, to one record per day for an individual cask based on all the dates between the two field values. The starting date identifies where to explode from and the second date (in this case the disgorge date) identifies where it must end. As another example, if you wanted to bring it up to today’s date – or the day before today, if you have an overnight process running at midnight – you could use getdate()-1.

The cross join in the common table expression definition is used to create a Cartesian join of sys.columns on itself. From my investigations there is no particular significance to the sys.columns table other than that it is a system table holding all columns of all tables in the database and is therefore likely to be reasonably large. It so happens that in the database I am using there are a lot of tables and a lot of columns, so a single self cross join creates more than enough numbers for anything I will ever need. If you wish to create further records you can cross join sys.columns on itself again by repeating the cross join line with subsequent aliases (e.g. cross join sys.columns c). After experimentation I discovered, for instance, that if I left out the cross join there were only some 4,000 records in my numbers table – which I suspect reflects the fact that I have 4,000 columns across the tables of my database. In my case 4,000 x 4,000, the Cartesian product of sys.columns with itself, gives 16 million records, more numbers than I will ever need. It should be noted that I am restricting the CTE numbers table to the top 20,000 anyway, which in our case covers over 54 years – more back dating than I think I will ever need. It should also be noted that the original code I discovered had three cross joins, presumably because the number of tables (and so columns) in that database was much smaller.

;WITH Numbers(number) AS (
  select top(20000) row_number() over (order by (select 0))-1
  from sys.columns a
  cross join sys.columns b
)

INSERT into t066tempexpand
(chargedate,
accountcode,
pkidt002,
pkidt053dailystoragecharge,
locationwarehouse)
select 
dateadd(day,v.number,d.bondindate) 
chargedate,
accountcode, 
pkid,
pkidt053dailystoragecharge,
locationwarehouse
  from t002caskfill d
  join Numbers v on v.number between 0 and datediff(day, bondindate, disgorgedate)
  where pkid > 2000

Here I am joining to an extra table to find the daily storage charge – this is just a test.

select
a.chargedate,
a.accountcode,
a.pkidt002,
b.dailycharge,
a.pkidt053dailystoragecharge,
a.locationwarehouse,
b.chargecode
from dbo.t066tempexpand a
join dbo.v128dailycaskstoragelk023 b on a.pkidt053dailystoragecharge=b.pkid 
order by pkidt002,chargedate;

And lastly I insert all this information into a Charge history table.

INSERT INTO dbo.t0032chargehistory
(chargedate,
accountcode,
pkidt002cask,
chargedamount,
pkidt053chargecode,
locationwarehouse,
chargecode)
select
a.chargedate,
a.accountcode,
a.pkidt002,
b.dailycharge,
a.pkidt053dailystoragecharge,
a.locationwarehouse,
b.chargecode
from dbo.t066tempexpand a
join dbo.v128dailycaskstoragelk023 b on a.pkidt053dailystoragecharge=b.pkid 
order by pkidt002,chargedate;

SQL Azure – Create a Function that uses multiple If Else statements

Example CREATE FUNCTION code (SQL Azure) that uses IF ELSE statements to compare three date parameters (referenced to fields) and determine an integer difference between dates. In this case, for whisky in a bond, we are calculating a number of days and the parameters are: the date the cask arrived in bond, the date the cask went out of bond, and the date the cask was disgorged.
In English: I am calculating the number of days in bond from the bond-in date to today’s date if there is no bond-out date or disgorge date. If either of those dates exists, the number of days in bond runs from the bond-in date to the earlier of the two.

I add in an extra day so that if a cask comes in on Tuesday and goes out the following day, the number of days is counted as 2.

Clearly this is a nice structure that can be used in multiple contexts with different factors. I test the most common case in the first branch in the hope that this will speed things up on the server.

CREATE FUNCTION [dbo].[calculatedaysinbondcorrect]
(
@datebondin Date,
@datebondout Date,
@datedisgorge Date
)
RETURNS integer
as
BEGIN
DECLARE @date1 DATE, @date2 DATE, @date3 DATE;
DECLARE @daysinbond INT;

SET @date1 = DATEADD(DAY,-1,@datebondin)
SET @date2 = @datedisgorge
SET @date3 = @datebondout

IF ((@date2 is null) and (@date3 is null))
	BEGIN 
		SET @date2 = getdate()
	END
ELSE If ((@date3 is null) and (@date2 is not null))
	Begin
		Set @date2 = @datedisgorge
	END
ELSE IF  ((@date3 is not null) and (@date2 is null))
    BEGIN
		SET @date2 = @datebondout
	END	
ELSE IF (@date2 > @date3)
	BEGIN 
		SET @date2 = @datebondout
	END
ELSE IF (@date2 < @date3)
	BEGIN 
		SET @date2 = @datedisgorge
	END
ELSE IF (@date2 = @date3)
	BEGIN 
		SET @date2 = @datedisgorge
	END

SELECT @daysinbond = DATEDIFF(dd, @date1, @date2)

RETURN @daysinbond;
END
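A quick usage sketch (the dates are made up): a cask that came into bond on 1 March 2023 and was disgorged on 5 March 2023, with no bond-out date, counts as 5 days because of the extra day added at the start.

SELECT dbo.calculatedaysinbondcorrect('2023-03-01', NULL, '2023-03-05') AS daysinbond;
-- returns 5 (1, 2, 3, 4 and 5 March all count)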

Favicon Creation a suggested method

Icons are not just about branding.

They also assist users in distinguishing your application from other applications that may be open on the desktop. Now in the past the configuration of these icons seems to have been a bit flaky with different browsers requiring different formats. After quite a bit of research I have managed to settle on a procedure which will get you good icons for your php applications.

Firstly identify a picture that you wish to make into an icon.

Next get as good a quality 256 by 256 PNG of the image you wish to display as you can, ensuring that if you require a transparent background you include it in the PNG.

Next go into Inkscape and make 2 exports:
1) a 256 by 256 PNG. This will be for desktop Windows computers.
2) a 57 by 57 PNG titled apple-touch-icon.png

Then go to an icon converter site

The one I used to go to doesn’t seem to exist anymore but the following site seems to do a good job.

RedKetchup Icon Converter

There is a step-by-step procedure here – go through each step.

Step 1 : Choose the png file that you exported from Inkscape
Step 2 : Choose your preferred shape I normally just choose the square
Step 3 : It is important here to choose the multi size in one icon option as follows.

Note if you select multiple image shapes you will get multiple icon sizes. Download the generated ico file and then rename it to favicon.ico.

Place the favicon.ico file and the apple-touch-icon.png file in the root directory of your PHP application and upload them to the server; everything should now work its way through to desktop icons.