# TSQL

## Resources

![[2022-08-11-14-37-33.png]]

- [MSDN Documentation](https://docs.microsoft.com/en-us/sql/t-sql/language-reference?view=sql-server-ver15)
- [Documentation Outline Idea](https://derekhackett.com/t-sql-documentation)
- [SQL Red Gate Doc](https://www.red-gate.com/products/sql-development/sql-doc/)
- [Elaboration](https://www.red-gate.com/simple-talk/sql/t-sql-programming/documenting-your-sql-server-database/)
- [Getting SQL Server and databases set up on a Mac](https://database.guide/install-sql-server-2019-on-a-mac/)
- [SQL Code Formatter](https://www.freeformatter.com/sql-formatter.html#ad-output)
- [SQL Injection Cheat Sheet](https://www.netsparker.com/blog/web-security/sql-injection-cheat-sheet/)
- [Data Warehouse ETL Framework](https://etl-framework.massstreetuniversity.com/data-warehouse-etl-framework/)
- [Data Models](http://www.databaseanswers.org/data_models/)
- Extended properties
  - <https://www.sqlservercentral.com/articles/extended-properties-introduction>
  - <https://www.mssqltips.com/sqlservertip/5384/working-with-sql-server-extended-properties/>

## Data Types

| Data Type | Range | Storage |
|:-----------------:|:----------------------------------------------------------------------------------------------|:--------------------------|
| `varchar(x)` | Variable-length string of up to `x` characters ("variable character") | Variable, up to `x` bytes |
| `nvarchar(x)` | Variable-length Unicode string of up to `x` characters ("national character varying") | Variable, up to `2x` bytes |
| `char(x)` | Fixed-length string of `x` characters. Unlike `varchar`, if you only use 1 of 50 chars, the other 49 are padded with spaces | Fixed, `x` bytes |
| `bigint` | -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807) | 8 Bytes |
| `int` | -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647) | 4 Bytes |
| `smallint` | -2^15 (-32,768) to 2^15-1 (32,767) | 2 Bytes |
| `tinyint` | 0 to 255 | 1 Byte |
| `money` | -922,337,203,685,477.5808 to 922,337,203,685,477.5807 (-922,337,203,685,477.58 to 922,337,203,685,477.58 for Informatica. Informatica only supports two decimals, not four.) | 8 Bytes |
| `smallmoney` | -214,748.3648 to 214,748.3647 | 4 Bytes |
| `date(s)/time(s)` | [see docs](https://docs.microsoft.com/en-us/sql/t-sql/data-types/date-transact-sql?view=sql-server-ver15) | |
| `float` | -1.79E+308 to -2.23E-308, 0 and 2.23E-308 to 1.79E+308 | Depends on the value of n |
| `real` | -3.40E+38 to -1.18E-38, 0 and 1.18E-38 to 3.40E+38 | 4 Bytes |

---

- Reference:
  - [MS Docs](https://docs.microsoft.com/en-us/sql/t-sql/data-types/data-types-transact-sql?view=sql-server-ver15)

### Dates

#### Difference Between datetime2 and datetimeoffset

```sql
CREATE TABLE #DateTests
    (DateTime_2 DATETIME2(7),
     DateTime_Offset DATETIMEOFFSET(7));

INSERT INTO #DateTests VALUES (GETDATE(), GETDATE());

SELECT * FROM #DateTests;
```

![[2022-03-14-13-07-29.png]]

#### How to add a valid time zone to a datetime value

`AT TIME ZONE` will assign a time zone offset to a `datetime`, `smalldatetime`, or `datetime2` value that otherwise would not include one. It works by simply adding the words `AT TIME ZONE` immediately after a `datetime` or `datetime2` value and then listing a valid time zone. The time zone argument is an `NVARCHAR(256)` value. Since this author is writing this tip from the Eastern Time zone in the US, that will be the time zone selected.

The second option is to use the `GETUTCDATE()` function, which will return the UTC date and time rather than the system date and time.
This will be recorded with the correct time zone (+00:00) and will thus be accurate.

```sql
DROP TABLE IF EXISTS #DateTests;

CREATE TABLE #DateTests
    (DateTime_2 DATETIME2(7),
     DateTime_Offset DATETIMEOFFSET(7));

INSERT INTO #DateTests
VALUES (GETDATE(), GETDATE() AT TIME ZONE N'US Eastern Standard Time');

INSERT INTO #DateTests
VALUES (GETDATE(), GETUTCDATE());

SELECT * FROM #DateTests;
```

![[2022-03-14-13-10-04.png]]

#### Get list of valid time zones

```sql
SELECT * FROM sys.time_zone_info;
```

#### Get list of your values for each time zone

```sql
SELECT DateTime_Offset
     , tzi.name
     , DateTime_Offset AT TIME ZONE tzi.name AS ConvertedDateAndTime
FROM #DateTests
CROSS JOIN sys.time_zone_info tzi
ORDER BY tzi.name;
```

#### Date Manipulation

```sql
DATEPART ( datepart , date )
```

| datepart | Abbreviations |
| ----------- | ------------- |
| year | yy, yyyy |
| quarter | qq, q |
| month | mm, m |
| dayofyear | dy, y |
| day | dd, d |
| week | wk, ww |
| weekday | dw |
| hour | hh |
| minute | mi, n |
| second | ss, s |
| millisecond | ms |
| microsecond | mcs |
| nanosecond | ns |
| tzoffset | tz |
| iso_week | isowk, isoww |

Returns an INT.

```sql
SELECT DATEPART(datepart, '2007-10-30 12:15:32.1234567 +05:10')
```

| datepart | Return value |
| ---------------------- | ------------ |
| year, yyyy, yy | 2007 |
| quarter, qq, q | 4 |
| month, mm, m | 10 |
| dayofyear, dy, y | 303 |
| day, dd, d | 30 |
| week, wk, ww | 44 |
| weekday, dw | 3 |
| hour, hh | 12 |
| minute, n | 15 |
| second, ss, s | 32 |
| millisecond, ms | 123 |
| microsecond, mcs | 123456 |
| nanosecond, ns | 123456700 |
| tzoffset, tz | 310 |
| iso_week, isowk, isoww | 44 |

### JSON

#### Data to json for app

Instead of selecting data into a data frame or dealing with data in that manner, just select the data into [[DevLog/Z/JSON]] straight from the database so that you can use it in the app as pure JSON. If you need to push data back into the database, you can read the JSON into pandas and push that to the SQL db, or maybe even just send the JSON string back!

#### Select data into json

```sql
SELECT TOP (1000) *
FROM [DataBase].[schema].[table] AS t
FOR JSON PATH, ROOT('RootNode')
```

---

- Reference:
  - [Reference Docs](https://docs.microsoft.com/en-us/sql/relational-databases/json/format-query-results-as-json-with-for-json-sql-server?view=sql-server-ver15)

#### Working with json

![[( 2022-03-16 A Cheat Sheet for Working with Json in Sql Server]]

## Query Order Of Operations

```sql
( 5.1 ) SELECT
( 5.2 ) DISTINCT
( 7 )   TOP
( 1 )   FROM <tables> [JOIN, APPLY, PIVOT, UNPIVOT]
( 2 )   WHERE
( 3 )   GROUP BY
( 4 )   HAVING
( 6 )   ORDER BY
```

1. `FROM`: The query process starts with the FROM clause. Beginning with the specified source tables, it processes table operators (join, apply, etc.) in written order from left to right.
2. `WHERE`: This phase filters the rows based on the predicate in the WHERE clause. Only rows for which the predicate evaluates to TRUE are returned.
3. `GROUP BY`: This phase arranges the filtered rows in groups based on the set of expressions (aka, the grouping set) specified in the GROUP BY clause. There will be one result row per qualifying group.
4. `HAVING`: This phase filters the groups on the predicate that appears in the HAVING clause (similar to a WHERE clause but for groups). Only groups for which the predicate evaluates to TRUE are returned.
5. `SELECT`: This phase evaluates the expressions in the SELECT list (5.1), then `DISTINCT` removes duplicate rows from the results (5.2).
6. `ORDER BY`: This phase orders the rows according to the list in the ORDER BY clause.
7. `TOP`: This phase filters the specified number of rows based on the ordering in the ORDER BY clause, or based on arbitrary order if there is no ORDER BY clause specified.
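A quick way to see the logical order above in action (the table and column names here are hypothetical): an alias defined in the SELECT phase (5) is not visible to WHERE (2), which runs earlier, but it is visible to ORDER BY (6), which runs later.

```sql
SELECT Age, COUNT(Name) AS Counts
FROM Source_Table
-- WHERE Counts > 5     -- fails: WHERE (2) runs before SELECT (5), so the alias doesn't exist yet
GROUP BY Age
HAVING COUNT(Name) > 5  -- aggregate filters go here instead, repeating the expression
ORDER BY Counts DESC;   -- works: ORDER BY (6) runs after SELECT (5), so the alias exists
```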
## Index Statistics

```sql
DBCC SHOW_STATISTICS('ticketer.app.metric', 'CIX_Created_Date');
GO
```

---

- Reference:
  - [[( 2021-12-20 How to Think like the Sql Server Engine]]

## Service Broker

### Reference

- <https://www.sqlshack.com/using-the-sql-server-service-broker-for-asynchronous-processing/>
- <https://www.sqlservercentral.com/articles/service-broker-part-1-service-broker-basics>
- <https://youtu.be/CXPJL1jhipA>

### Setup

Enable the service broker on a database:

```sql
ALTER DATABASE OrderDB SET ENABLE_BROKER;
```

#### Create a Message Type

A message type defines what type of data you are sending.

```sql
CREATE MESSAGE TYPE ReceivedOrders
AUTHORIZATION dbo
VALIDATION = None
```

Validation options:

- NONE (the typical default)
- EMPTY
- WELL_FORMED_XML
- VALID_XML (WITH SCHEMA COLLECTION)

#### Create a Contract

The contract creates a logical grouping of one or more message types.

```sql
CREATE CONTRACT postmessages
    (ReceivedOrders SENT BY ANY) -- allows messages from any endpoint
```

#### Create a Queue

A queue holds the messages that you are sending.

```sql
CREATE QUEUE OrderQueue
WITH STATUS = ON,
     RETENTION = OFF -- Setting RETENTION to ON keeps messages in the queue instead of deleting them, but this impacts performance
```

#### Create the Service

The service delivers messages to the correct queue and enforces the contract for each conversation.
```sql CREATE SERVICE OrderService AUTHORIZATION dbo ON QUEUE OrderQueue (postmessages) ``` ### Using the Service Broker #### Create Orders Table ```sql CREATE TABLE [dbo].[Orders] ( [OrderID] [int] NOT NULL, [OrderDate] [date] NULL, [ProductCode] [varchar](50) NOT NULL, [Quantity] [numeric](9, 2) NULL, [UnitPrice] [numeric](9, 2) NULL, CONSTRAINT [PK__Orders] PRIMARY KEY CLUSTERED ([OrderID] ASC, [ProductCode] ASC) ON [PRIMARY] ) ON [PRIMARY] GO ``` #### Create Stored Proc to Insert Orders and Send Messages To The Queue ```sql CREATE PROCEDURE dbo.usp_CreateOrders ( @OrderID INT ,@ProductCode VARCHAR(50) ,@Quantity NUMERIC(9, 2) ,@UnitPrice NUMERIC(9, 2) ) AS BEGIN DECLARE @OrderDate SMALLDATETIME = GETDATE() DECLARE @XMLMessage XML CREATE TABLE #Message ( OrderID INT PRIMARY KEY , OrderDate DATE , ProductCode VARCHAR(50) , Quantity NUMERIC(9, 2) , UnitPrice NUMERIC(9, 2) ) INSERT INTO #Message ( OrderID , OrderDate , ProductCode , Quantity , UnitPrice ) VALUES ( @OrderID , @OrderDate , @ProductCode , @Quantity , @UnitPrice ) --Insert to Orders Table INSERT INTO Orders ( OrderID , OrderDate , ProductCode , Quantity , UnitPrice ) VALUES ( @OrderID , @OrderDate , @ProductCode , @Quantity , @UnitPrice ) --Creating the XML Message SELECT @XMLMessage = (SELECT * FROM #Message FOR XML PATH('Order'), TYPE); DECLARE @Handle UNIQUEIDENTIFIER; --Sending the Message to the Queue BEGIN DIALOG CONVERSATION @Handle FROM SERVICE OrderService TO SERVICE 'OrderService' ON CONTRACT [postmessages] WITH ENCRYPTION = OFF; SEND ON CONVERSATION @Handle MESSAGE TYPE ReceivedOrders(@XMLMessage); -- SEND command is used to send an XML formatted message to the ReceivedOrders queue. 
END
GO
```

#### Send Sample Data

```sql
EXEC dbo.usp_CreateOrders 202003, 'PD0001', 1, 10.50
EXEC dbo.usp_CreateOrders 202003, 'PD0002', 2, 100.75
EXEC dbo.usp_CreateOrders 202003, 'PD0010', 1.5, 20.00
```

##### Verify that records are in the correct locations

```sql
SELECT *
FROM dbo.Orders
------------------------
SELECT service_name
     , priority
     , queuing_order
     , service_contract_name
     , message_type_name
     , validation
     , message_body
     , message_enqueue_time
     , status
FROM dbo.OrderQueue
```

#### Consume The Queue

```sql
DECLARE @Handle UNIQUEIDENTIFIER;
DECLARE @MessageType SYSNAME;
DECLARE @Message XML
DECLARE @OrderDate DATE
DECLARE @OrderID INT
DECLARE @ProductCode VARCHAR(50)
DECLARE @Quantity NUMERIC(9, 2)
DECLARE @UnitPrice NUMERIC(9, 2)

WAITFOR (
    RECEIVE TOP (1)
        @Handle = conversation_handle,
        @MessageType = message_type_name,
        @Message = message_body
    FROM dbo.OrderQueue), TIMEOUT 1000

SET @OrderID = CAST(CAST(@Message.query('/Order/OrderID/text()') AS NVARCHAR(MAX)) AS INT)
SET @OrderDate = CAST(CAST(@Message.query('/Order/OrderDate/text()') AS NVARCHAR(MAX)) AS DATE)
SET @ProductCode = CAST(CAST(@Message.query('/Order/ProductCode/text()') AS NVARCHAR(MAX)) AS VARCHAR(50))
SET @Quantity = CAST(CAST(@Message.query('/Order/Quantity/text()') AS NVARCHAR(MAX)) AS NUMERIC(9,2))
SET @UnitPrice = CAST(CAST(@Message.query('/Order/UnitPrice/text()') AS NVARCHAR(MAX)) AS NUMERIC(9,2))

PRINT @OrderID
PRINT @OrderDate
PRINT @ProductCode
PRINT @Quantity
PRINT @UnitPrice
```

The above code consumes a message from the queue and processes it. With this approach, asynchronous processing can be done.
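One thing the receive loop above does not do is close the dialog. Every `BEGIN DIALOG CONVERSATION` leaves an endpoint behind in `sys.conversation_endpoints` until both sides end it. A minimal cleanup sketch (reusing the `@Handle` and `dbo.OrderQueue` names from the example above) might look like:

```sql
DECLARE @Handle UNIQUEIDENTIFIER;

WAITFOR (
    RECEIVE TOP (1)
        @Handle = conversation_handle
    FROM dbo.OrderQueue
), TIMEOUT 1000;

-- Once the message has been processed, end the conversation so the
-- endpoint is cleaned up instead of accumulating in the database.
IF @Handle IS NOT NULL
    END CONVERSATION @Handle;
```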
### Summary

```sql
-- Enable Service Broker
USE <Database Name,SYSNAME,OrderDB>
GO
ALTER DATABASE <Database Name,SYSNAME,OrderDB> SET ENABLE_BROKER;
GO

-- Create Message
CREATE MESSAGE TYPE <Message Type,,ReceivedOrders>
AUTHORIZATION <Owner, SYSNAME, dbo>
VALIDATION = <Validation,,None>
GO

-- Create Contract
CREATE CONTRACT <Contract Name,,postmessages>
    (<Message Type,SYSNAME,ReceivedOrders> SENT BY <End Points,,ANY>)
GO

-- Create Queue
CREATE QUEUE <Queue Name,,OrderQueue>
WITH STATUS = ON,
     RETENTION = OFF,
     ACTIVATION (
        PROCEDURE_NAME = <Proc to call on new message,SYSNAME,>, -- sproc to run when the queue receives a message
        MAX_QUEUE_READERS = <Max queue readers,,50>, -- max concurrently executing instances of sproc
        EXECUTE AS '<Owner, SYSNAME, dbo>'
     );
GO

-- Create Service
CREATE SERVICE <Service Name,,OrderService>
AUTHORIZATION <Owner, SYSNAME, dbo>
ON QUEUE <Queue Name,,OrderQueue> (<Contract Name,,postmessages>)
GO

------------------------------------------------------------------------------------------
-- SAMPLE USAGE
------------------------------------------------------------------------------------------

-- Create Table
CREATE TABLE dbo.Orders (
    OrderID int NOT NULL,
    OrderDate date NULL,
    ProductCode varchar(50) NOT NULL,
    Quantity numeric(9, 2) NULL,
    UnitPrice numeric(9, 2) NULL,
    CONSTRAINT PK__Orders PRIMARY KEY CLUSTERED (OrderID ASC, ProductCode ASC) ON [PRIMARY]
) ON [PRIMARY]
GO

-- Create Data Insertion Proc
CREATE PROCEDURE dbo.usp_CreateOrders (
    @OrderID INT
    ,@ProductCode VARCHAR(50)
    ,@Quantity NUMERIC(9, 2)
    ,@UnitPrice NUMERIC(9, 2)
)
AS
BEGIN
    DECLARE @OrderDate SMALLDATETIME = GETDATE()
    DECLARE @XMLMessage XML

    CREATE TABLE #Message (
        OrderID INT PRIMARY KEY
        , OrderDate DATE
        , ProductCode VARCHAR(50)
        , Quantity NUMERIC(9, 2)
        , UnitPrice NUMERIC(9, 2)
    )

    INSERT INTO #Message
        ( OrderID
        , OrderDate
        , ProductCode
        , Quantity
        , UnitPrice )
    VALUES
        ( @OrderID
        , @OrderDate
        , @ProductCode
        , @Quantity
        , @UnitPrice )

    --Insert to Orders Table
    INSERT INTO Orders
        ( OrderID
        , OrderDate
        ,
ProductCode , Quantity , UnitPrice ) VALUES ( @OrderID , @OrderDate , @ProductCode , @Quantity , @UnitPrice ) --Creating the XML Message SELECT @XMLMessage = (SELECT * FROM #Message FOR XML PATH('Order'), TYPE); DECLARE @Handle UNIQUEIDENTIFIER; --Sending the Message to the Queue BEGIN DIALOG CONVERSATION @Handle FROM SERVICE OrderService TO SERVICE 'OrderService' ON CONTRACT [postmessages] WITH ENCRYPTION = OFF; SEND ON CONVERSATION @Handle MESSAGE TYPE ReceivedOrders(@XMLMessage); -- SEND command is used to send an XML formatted message to the ReceivedOrders queue. END GO -- Insert Sample Data EXEC dbo.usp_CreateOrders 202003,'PD0001',1,10.50 EXEC dbo.usp_CreateOrders 202003,'PD0002',2,100.75 EXEC dbo.usp_CreateOrders 202003,'PD0010',1.5,20.00 -- Verify Records Landed SELECT * FROM dbo.Orders ------------------------ SELECT service_name , priority , queuing_order , service_contract_name , message_type_name , validation , message_body , message_enqueue_time , status FROM dbo.OrderQueue -- Consume The Queue DECLARE @Handle UNIQUEIDENTIFIER ; DECLARE @MessageType SYSNAME ; DECLARE @Message XML DECLARE @OrderDate DATE DECLARE @OrderID INT DECLARE @ProductCode VARCHAR(50) DECLARE @Quantity NUMERIC (9, 2) DECLARE @UnitPrice NUMERIC (9, 2) WAITFOR( RECEIVE TOP (1) @Handle = conversation_handle, @MessageType = message_type_name, @Message = message_body FROM dbo.OrderQueue),TIMEOUT 1000 SET @OrderID = CAST(CAST(@Message.query('/Order/OrderID/text()') AS NVARCHAR(MAX)) AS INT) SET @OrderDate = CAST(CAST(@Message.query('/Order/OrderDate/text()') AS NVARCHAR(MAX)) AS DATE) SET @ProductCode = CAST(CAST(@Message.query('/Order/ProductCode/text()') AS NVARCHAR(MAX)) AS VARCHAR(50)) SET @Quantity = CAST(CAST(@Message.query('/Order/Quantity/text()') AS NVARCHAR(MAX)) AS NUMERIC(9,2)) SET @UnitPrice = CAST(CAST(@Message.query('/Order/UnitPrice/text()') AS NVARCHAR(MAX)) AS NUMERIC(9,2)) PRINT @OrderID PRINT @OrderDate PRINT @ProductCode PRINT @Quantity PRINT @UnitPrice ``` ## 
Logical Flow

### Try Catch

```sql
BEGIN TRY
    --Process that may create an error
END TRY
BEGIN CATCH
    --Process to handle error
END CATCH
```

| Function Name | Function Definition |
|-------------------|-----------------------------------------------------------------------|
| `ERROR_LINE` | The line number the error occurred on. |
| `ERROR_MESSAGE` | Plain language description of the error. |
| `ERROR_NUMBER` | The number of the error. |
| `ERROR_PROCEDURE` | The name of the function or stored procedure that produced the error. |
| `ERROR_SEVERITY` | The severity value of the error. |
| `ERROR_STATE` | The state number of the error. |

```sql
USE demo
BEGIN TRY
    PRINT 1/0
END TRY
BEGIN CATCH
    PRINT 'Error ' + CAST(ERROR_NUMBER() AS NVARCHAR(6)) + ' ' + ERROR_MESSAGE()
END CATCH
```

### Loops

#### For

For loops do not exist in SQL Server, so here's how to replicate them with a [[s.q.tsql.flow.loops.while]] loop:

```sql
DECLARE @cnt INT = 0;
DECLARE @cnt_total INT = 10; -- the number of iterations you want

WHILE @cnt < @cnt_total
BEGIN
    {...statements...}
    SET @cnt = @cnt + 1;
END;
```

#### While

##### Syntax

```sql
WHILE CONDITION
BEGIN
    CODE
    BREAK    --Optional
    CONTINUE --Optional
END
```

```sql
WHILE (SELECT SUM([OrderQty]) FROM #SalesOrderDetail) < 300000
BEGIN
    UPDATE #SalesOrderDetail SET [OrderQty] = [OrderQty] + 1000

    IF (SELECT MAX(OrderQty) FROM #SalesOrderDetail) > 3000
        BREAK
    ELSE
        CONTINUE
END
```

```sql
DECLARE @counter INT = 0

WHILE (@counter <= 5)
BEGIN
    PRINT 'Too much for the market to bear'
    PRINT '@counter value is:' + CAST(@counter AS VARCHAR)
    SET @counter = @counter + 1
END;
```

##### While Example

```sql
CREATE TABLE #email (id SMALLINT, email VARCHAR(100)) -- create the target table so the loop is runnable

DECLARE @count SMALLINT = 0
WHILE @count < 100
BEGIN
    INSERT INTO #email VALUES (@count, CONCAT('user', FLOOR(RAND() * 1000), '@outlook.com'))
    SET @count = @count + 1
END
```

`WHILE` is not as efficient as [[s.q.tsql.dbos.common-table-expressions]]

##### CTE Example

```sql
WITH numbergenerator (id, email)
AS (
    SELECT 1 AS id,
           CONCAT('user', FLOOR(1000 * RAND(CHECKSUM(NEWID()))), '@outlook.com') AS email
    UNION ALL
    SELECT ng.id + 1 AS id,
           CONCAT('user', FLOOR(1000 * RAND(CHECKSUM(NEWID()))), '@outlook.com')
    FROM numbergenerator ng
    WHERE ng.id < 100
)
SELECT *
INTO #numbergenerator
FROM numbergenerator ng;
GO
```

## Bulk Insert

<https://docs.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver15>

```sql
BULK INSERT
   { database_name.schema_name.table_or_view_name | schema_name.table_or_view_name | table_or_view_name }
      FROM 'data_file'
     [ WITH
    (
   [ [ , ] BATCHSIZE = batch_size ]
   [ [ , ] CHECK_CONSTRAINTS ]
   [ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
   [ [ , ] DATAFILETYPE = { 'char' | 'native' | 'widechar' | 'widenative' } ]
   [ [ , ] DATA_SOURCE = 'data_source_name' ]
   [ [ , ] ERRORFILE = 'file_name' ]
   [ [ , ] ERRORFILE_DATA_SOURCE = 'errorfile_data_source_name' ]
   [ [ , ] FIRSTROW = first_row ]
   [ [ , ] FIRE_TRIGGERS ]
   [ [ , ] FORMATFILE_DATA_SOURCE = 'data_source_name' ]
   [ [ , ] KEEPIDENTITY ]
   [ [ , ] KEEPNULLS ]
   [ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
   [ [ , ] LASTROW = last_row ]
   [ [ , ] MAXERRORS = max_errors ]
   [ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
   [ [ , ] ROWS_PER_BATCH = rows_per_batch ]
   [ [ , ] ROWTERMINATOR = 'row_terminator' ]
   [ [ , ] TABLOCK ]
   -- input file format options
   [ [ , ] FORMAT = 'CSV' ]
   [ [ , ] FIELDQUOTE = 'quote_characters' ]
   [ [ , ] FORMATFILE = 'format_file_path' ]
   [ [ , ] FIELDTERMINATOR = 'field_terminator' ]
   [ [ , ] ROWTERMINATOR = 'row_terminator' ]
    )]
```

### Example

```sql
BULK INSERT db.schema.table_name
FROM 'path/to/file.csv'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');
```

### Example with dynamic SQL

```sql
DECLARE @sql NVARCHAR(MAX)
DECLARE @file_path NVARCHAR(255) = 'path/to/file.csv'

SET @sql = 'BULK INSERT db.schema.table FROM ''' + @file_path + ''' WITH (FIRSTROW = 2, FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''\n'')'

EXEC (@sql)
```

## Sql Agent Jobs

### Sample Job Code

-
<https://www.mssqltips.com/sqlservertip/3052/simple-way-to-create-a-sql-server-job-using-tsql/>

```sql
USE msdb
GO

CREATE PROCEDURE [dbo].[sp_add_job_quick]
    @job nvarchar(128),
    @mycommand nvarchar(max),
    @servername nvarchar(128),
    @startdate nvarchar(8),
    @starttime nvarchar(8)
AS

--Add a job
EXEC dbo.sp_add_job @job_name = @job

--Add a job step named process step. This step runs the stored procedure
EXEC sp_add_jobstep
    @job_name = @job,
    @step_name = N'process step',
    @subsystem = N'TSQL',
    @command = @mycommand

--Schedule the job at a specified date and time
EXEC sp_add_jobschedule
    @job_name = @job,
    @name = 'MySchedule',
    @freq_type = 1,
    @active_start_date = @startdate,
    @active_start_time = @starttime

-- Add the job to the SQL Server
EXEC dbo.sp_add_jobserver
    @job_name = @job,
    @server_name = @servername
```

This is a stored procedure named `sp_add_job_quick` that calls 4 `msdb` stored procedures:

- `sp_add_job` creates a new job
- `sp_add_jobstep` adds a new step in the job
- `sp_add_jobschedule` schedules a job for a specific date and time
- `sp_add_jobserver` adds the job to a specific server

```sql
EXEC dbo.sp_add_job_quick
    @job = 'myjob',             -- The job name
    @mycommand = 'sp_who',      -- The T-SQL command to run in the step
    @servername = 'serverName', -- SQL Server name. If running locally, you can use @servername = @@SERVERNAME
    @startdate = '20130829',    -- The date August 29th, 2013
    @starttime = '160000'       -- The time, 16:00:00
```

### Execute Python Script On SQL Server Agent Job

- <https://stackoverflow.com/questions/54680105/how-to-execute-python-script-as-administrator-in-sql-server-agent-job>

```sql
EXECUTE sp_execute_external_script
    @language = N'Python',
    @script = N'
a = 1
b = 2
c = a/b
d = a*b
print(c, d)
'
```

OR

> I was able to solve the problem to my question by adding a credential/proxy account, assigning it to the Run as in the step, and then altering the Command to look like this:
> <br>
> `C:\Windows\System32\cmd.exe /C python "C:\PythonScripts\myPython.py"`

### Setting up alerts for all sql server agent jobs

> Here is a sample T-SQL script that will create the commands for you. This script actually outputs the commands that can then be copied and pasted into a query window and executed to make the updates. The script could also be changed to automatically issue the commands, but with this version you have the ability to review the commands before you execute them.
>
> -- <https://www.mssqltips.com/sqlservertip/1091/setting-up-alerts-for-all-sql-server-agent-jobs/>

```sql
USE msdb
GO

DECLARE @operator varchar(50)
SET @operator = 'SQLalerts'

SELECT 'EXEC msdb.dbo.sp_update_job @job_ID = ''' + CONVERT(varchar(50), job_id)
       + ''' ,@notify_level_email = 2, @notify_email_operator_name = ''' + @operator + ''''
FROM sysjobs
```

When this gets run, the following output is created:

```sql
EXEC msdb.dbo.sp_update_job @job_ID = '589D2B60-EDBD-45B5-BDE6-4DD974D20D25' ,@notify_level_email = 2, @notify_email_operator_name = 'SQLalerts'
EXEC msdb.dbo.sp_update_job @job_ID = '6BE4306C-CC37-4D38-BC27-1B099601EF6A' ,@notify_level_email = 2, @notify_email_operator_name = 'SQLalerts'
EXEC msdb.dbo.sp_update_job @job_ID = 'F7569D9A-641E-4130-90F4-535F0B11FC1E' ,@notify_level_email = 2, @notify_email_operator_name = 'SQLalerts'
EXEC msdb.dbo.sp_update_job @job_ID = 'CD012AF2-BC96-4D9E-A03E-6ABB2F6048AF' ,@notify_level_email = 2, @notify_email_operator_name = 'SQLalerts'
EXEC msdb.dbo.sp_update_job @job_ID = '451C94B4-8BA3-48AA-BB66-D184F0C25556' ,@notify_level_email = 2, @notify_email_operator_name = 'SQLalerts'
EXEC msdb.dbo.sp_update_job @job_ID = '7EA95731-1E19-40F6-A5E3-325647DACDE9' ,@notify_level_email = 2, @notify_email_operator_name = 'SQLalerts'
```

![[2022-03-03-13-46-46.png]]

## DBOS

### Indexes

- [Index Maintenance Checklist](https://www.mssqltips.com/sqlservertip/1339/sql-server-index-maintenance-checklist/)

#### Clustered Index

> A clustered index defines the order in which data is physically stored in a table. Table data can be sorted in only one way; therefore, there can be only one clustered index per table. In SQL Server, the primary key constraint automatically creates a clustered index on that particular column.
```sql CREATE DATABASE schooldb CREATE TABLE student ( id INT PRIMARY KEY, -- <== name VARCHAR(50) NOT NULL, gender VARCHAR(50) NOT NULL, DOB datetime NOT NULL, total_score INT NOT NULL, city VARCHAR(50) NOT NULL ) -------------------------------- USE schooldb EXECUTE sp_helpindex student ``` The above query will return this result: ![[Pasted_image_20211201134834.png]] ```sql USE schooldb INSERT INTO student VALUES (6, 'Kate', 'Female', '03-JAN-1985', 500, 'Liverpool'), (2, 'Jon', 'Male', '02-FEB-1974', 545, 'Manchester'), (9, 'Wise', 'Male', '11-NOV-1987', 499, 'Manchester'), (3, 'Sara', 'Female', '07-MAR-1988', 600, 'Leeds'), (1, 'Jolly', 'Female', '12-JUN-1989', 500, 'London'), (4, 'Laura', 'Female', '22-DEC-1981', 400, 'Liverpool'), (7, 'Joseph', 'Male', '09-APR-1982', 643, 'London'), (5, 'Alan', 'Male', '29-JUL-1993', 500, 'London'), (8, 'Mice', 'Male', '16-AUG-1974', 543, 'Liverpool'), (10, 'Elis', 'Female', '28-OCT-1990', 400, 'Leeds'); -- NOTE id's in random order upon insertion -------------------------------- USE schooldb SELECT * FROM student ``` ![[Pasted_image_20211201134934.png]] Creating a new Clustered index: ```sql -- Since there can only be 1 per table, you'll need to delete the original one on the Primary Key use schooldb CREATE CLUSTERED INDEX IX_tblStudent_Gender_Score ON student(gender ASC, total_score DESC) ``` > The characteristics of the best clustering keys can be summarized in few points that are followed by most of the designers: > <br> > **Short**: Although SQL Server allows us to add up to 16 columns to the clustered index key, with maximum key size of 900 bytes, the typical clustered index key is much smaller than what is allowed, with as few columns as possible. The wide Clustered index key will also affect all non-clustered indexes built over that clustered index, as the clustered index key will be used as a lookup key for all the non-clustered indexes pointing to it. 
> **Static**: It is recommended to choose the columns that are not changed frequently in the clustered index key. Changing the clustered index key values means that the whole row will be moved to the new proper page to keep the data values in the correct order. > **Increasing**: Using an increasing column, such as the IDENTITY column, as a clustered index key will help in improving the INSERT process, that will directly insert the new values at the logical end of the table. This highly recommended choice will help in reducing the amount of memory required for the page buffers, minimize the need to split the page into two pages to fit the newly inserted values and the fragmentation occurrence, that required rebuilding or reorganizing the index again. > **Unique**: It is recommended to declare the clustered index key column or combination of columns as unique to improve the queries performance. Otherwise, SQL Server will automatically add a uniqueifier column to enforce the clustered index key uniqueness. > **Accessed** frequently: This is due to the fact that the rows will be stored in the clustered index in a sorted order based on that index key that is used to access the data. > **Used in the ORDER BY clause**: In this case, no need for the SQL Server Engine to sort the data in order to display it, as the rows are already sorted based on the index key used in the ORDER BY clause. --- - Reference: - <https://www.sqlshack.com/what-is-the-difference-between-clustered-and-non-clustered-indexes-in-sql-server/> - <https://www.sqlshack.com/designing-effective-sql-server-clustered-indexes/> #### non clustered index > A non-clustered index doesn’t sort the physical data inside the table. In fact, a non-clustered index is stored at one place and table data is stored in another place. This is similar to a textbook where the book content is located in one place and the index is located in another. This allows for more than one non-clustered index per table. 
> <br> > It is important to mention here that inside the table the data will be sorted by a clustered index. However, inside the non-clustered index data is stored in the specified order. The index contains column values on which the index is created and the address of the record that the column value belongs to. > <br> > When a query is issued against a column on which the index is created, the database will first go to the index and look for the address of the corresponding row in the table. It will then go to that row address and fetch other column values. **It is due to this additional step that non-clustered indexes are slower than [[TSQL Clustered Index|s.q.tsql.dbos.indexes.clustered-index]].** ```sql CREATE DATABASE schooldb CREATE TABLE student ( id INT PRIMARY KEY, -- <== name VARCHAR(50) NOT NULL, gender VARCHAR(50) NOT NULL, DOB datetime NOT NULL, total_score INT NOT NULL, city VARCHAR(50) NOT NULL ) -------------------------------- USE schooldb EXECUTE sp_helpindex student ``` The above query will return this result: ![[Pasted_image_20211201134834.png]] ```sql USE schooldb INSERT INTO student VALUES (6, 'Kate', 'Female', '03-JAN-1985', 500, 'Liverpool'), (2, 'Jon', 'Male', '02-FEB-1974', 545, 'Manchester'), (9, 'Wise', 'Male', '11-NOV-1987', 499, 'Manchester'), (3, 'Sara', 'Female', '07-MAR-1988', 600, 'Leeds'), (1, 'Jolly', 'Female', '12-JUN-1989', 500, 'London'), (4, 'Laura', 'Female', '22-DEC-1981', 400, 'Liverpool'), (7, 'Joseph', 'Male', '09-APR-1982', 643, 'London'), (5, 'Alan', 'Male', '29-JUL-1993', 500, 'London'), (8, 'Mice', 'Male', '16-AUG-1974', 543, 'Liverpool'), (10, 'Elis', 'Female', '28-OCT-1990', 400, 'Leeds'); -- NOTE id's in random order upon insertion -------------------------------- USE schooldb SELECT * FROM student ``` ![[Pasted_image_20211201134934.png]] ```sql use schooldb CREATE NONCLUSTERED INDEX IX_tblStudent_Name ON student(name ASC) ``` > The above script creates a non-clustered index on the “name” column of the student 
table. The index sorts by name in ascending order. As we said earlier, the table data and index will be stored in different places. The table records will be sorted by a clustered index if there is one. The index will be sorted according to its definition and will be stored separately from the table.

![[Pasted_image_20211201135410.png]]

#### show index statistics

```sql
DBCC SHOW_STATISTICS('ticketer.app.metric', 'CIX_Created_Date');
GO
```

---

- Reference:
  - [[r.2021.12.20.how-to-think-like-the-sql-server-engine]]

### common table expressions

A `CTE` is a temporary result set that you can reference within another `SELECT`, [[insert-into]], [[update]], or [[DevLog/s/q/tsql/syntax/dml/delete|delete]] statement. A `CTE` always returns a result set. They are used to simplify queries, like eliminating a derived table from the main query body:

```sql
WITH My_Table (Age, Counts)
AS (
    SELECT Age, COUNT(Name) AS Counts
    FROM Source_Table
    WHERE Age > 55
    GROUP BY Age
)
SELECT *
FROM My_Table
ORDER BY Age
GO
```

No [[#temp-tables]] assignment, no need to make an intermediary table. I like to think of `CTE`s like rice paddy fields, where the data, like the water, just flows downward into the next query or `CTE`.

#### Chaining CTE's:

```sql
WITH My_Table (Age, Counts)
AS (
    SELECT Age, COUNT(Name) AS Counts
    FROM Source_Table
    WHERE Age > 55
    GROUP BY Age
), -- THE COMMA IS THE IMPORTANT PART OF THIS CHAINING
My_Second_CTE
AS (
    SELECT Age, Counts, AVG(Counts) OVER () AS Avg_Counts -- window AVG; an ORDER BY is not allowed inside a CTE
    FROM My_Table
)
SELECT *
FROM My_Second_CTE
ORDER BY Age
```

CTE's are often faster than [[T-SQL Temp Tables|s.q.tsql.dbos.temp-tables]]

#### Recursive CTE

Useful when you don't have a defined number of iterations:

```sql
WITH CTE
AS (
    SELECT 1 AS n -- Anchor Member
    UNION ALL
    SELECT n + 1  -- Recursive Member
    FROM CTE
    WHERE n < 50  -- Terminator
)
SELECT n
FROM CTE;
```

### Temporary Tables

Temp tables are result sets stored in `tempdb` (and often cached in memory) for the life of the connection session.
There are two types:

#### Local

- Only available to the session that created them.
- Deleted once the session is terminated.
- Denoted with a `#` prepended to the table name.

#### Global

- Available to all sessions and users.
- Not deleted until the last session using them is terminated
- CAN be explicitly deleted
- Denoted with `##` prepended to the table name

#### Benefits

- Live in `tempdb`, which is heavily cached, so they are FAST
- Help to modularize your code instead of writing monolithic queries

#### Usage

You CAN simply create the new table, but to reuse a temp table iteratively while testing, it needs to be explicitly dropped first. The best method for this is the following code:

```sql
IF OBJECT_ID('Tempdb.dbo.#table') IS NOT NULL
    DROP TABLE #table

SELECT *
INTO #table
FROM other_table
WHERE Age > 55
```

Template Version

```sql
IF OBJECT_ID('Tempdb.dbo.#<Temp Table Name, Table,>') IS NOT NULL
    DROP TABLE #<Temp Table Name, Table,>
```

### Triggers

- <https://www.mssqltips.com/sqlservertip/5909/sql-server-trigger-example/>
- <https://docs.microsoft.com/en-us/sql/t-sql/statements/create-trigger-transact-sql?view=sql-server-ver15#remarks-for-dml-triggers>

```sql
USE AdventureWorks2012;
GO
IF OBJECT_ID ('Purchasing.LowCredit','TR') IS NOT NULL
    DROP TRIGGER Purchasing.LowCredit;
GO
-- This trigger prevents a row from being inserted in the Purchasing.PurchaseOrderHeader table
-- when the credit rating of the specified vendor is set to 5 (below average).
CREATE TRIGGER Purchasing.LowCredit
ON Purchasing.PurchaseOrderHeader
AFTER INSERT
AS
IF (ROWCOUNT_BIG() = 0)
    RETURN;
IF EXISTS (SELECT 1
           FROM inserted AS i
           JOIN Purchasing.Vendor AS v
               ON v.BusinessEntityID = i.VendorID
           WHERE v.CreditRating = 5
          )
BEGIN
    RAISERROR ('A vendor''s credit rating is too low to accept new purchase orders.', 16, 1);
    ROLLBACK TRANSACTION;
    RETURN
END;
GO

-- This statement attempts to insert a row into the PurchaseOrderHeader table
-- for a vendor that has a below average credit rating.
-- The AFTER INSERT trigger is fired and the INSERT transaction is rolled back.
INSERT INTO Purchasing.PurchaseOrderHeader (RevisionNumber, Status, EmployeeID,
    VendorID, ShipMethodID, OrderDate, ShipDate, SubTotal, TaxAmt, Freight)
VALUES (
     2
    ,3
    ,261
    ,1652
    ,4
    ,GETDATE()
    ,GETDATE()
    ,44594.55
    ,3567.564
    ,1114.8638
);
GO
```

### Views

In SQL, a view is a virtual table based on the result set of an SQL statement. A view contains rows and columns, just like a real table. The fields in a view are fields from one or more real tables in the database. If you often query a certain table with certain criteria but do not need a physical table for the result, a view is a good fit.

Views can be queried like most normal tables:

```sql
SELECT *
FROM v_view
```

## System Resources

### Extended Stored Procedures

#### xp_readerrorlog

[[( 2022-03-03 How to Read Log File in Sql Server Using Tsql]]

### Stored Procedure

#### master

##### sp_executesql

[[( 2022-03-03 Introduction to the Sp_executesql Stored Procedure with Examples]]

##### sp_MSforeachdb

```sql
-- Example usage
EXEC sp_MSforeachdb @command

--Example 1
--This query will return a listing of all tables in all databases on a SQL instance:
DECLARE @command varchar(1000)
SELECT @command = 'USE ? SELECT name FROM sysobjects WHERE xtype = ''U'' ORDER BY name'
EXEC sp_MSforeachdb @command

--This query will return a listing of all tables in all databases on a SQL instance:
EXEC sp_MSforeachdb 'USE ? SELECT name FROM sysobjects WHERE xtype = ''U'' ORDER BY name'

--Example 2
--This statement creates a stored procedure in each user database that will return a listing of all users in a database, sorted by their modification date
DECLARE @command varchar(1000)
SELECT @command = 'IF ''?'' NOT IN(''master'', ''model'', ''msdb'', ''tempdb'')
BEGIN
    USE ?
    EXEC('' CREATE PROCEDURE spNewProcedure1 AS SELECT name, createdate, updatedate FROM sys.sysusers ORDER BY updatedate DESC '')
END'
EXEC sp_MSforeachdb @command

-- You should take note that the USE ? statement is contained within the BEGIN...END block.
-- It is important to remember key T-SQL rules and account for them. In this case the rule is that,
-- when creating a procedure, the CREATE PROCEDURE phrase must be the first statement executed in the batch.
-- To accomplish this you can encapsulate the CREATE PROCEDURE code within an explicit EXEC() function.
```

#### msdb

##### sp_send_dbmail

###### Reference

<https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql?view=sql-server-ver15>

> "The parameters `@recipients`, `@copy_recipients`, and `@blind_copy_recipients` are semicolon-delimited lists of e-mail addresses. At least one of these parameters must be provided, or `sp_send_dbmail` returns an error."

###### Setup

1. In [[DevLog/s/q/tsql/tools/sql server management studio]] expand the `Management` folder in the Object Explorer
2. Right click on `Database Mail` > `Configure Database Mail` > `Next`
3. `Manage Database Mail accounts and profiles` in the radio button selection list
4. `Next`
5. `Create a new account` - this is the email account that will actually be sending the emails for you
   - Gmail is great here because it can be used as a pass-through, though you might need to enable additional access in the settings menu of Gmail
   - The outcome of this is that you receive emails from that Gmail account but in an automated fashion
6.
New account details - Account name and description are just for your reference but use them in tandem with the Display name to use an account for each of your Services - Email address is the service account email address - Display name is what it looks like the email is actually from - reply email is who the replies go to - server name for gmail would be `smtp.gmail.com` with port number 587 - The ssl checkbox should be ticked - Use basic authentication (radio button) - username is the service account email address - password is the password for the service account email address 7. `Create a new profile`, this is what will be holding the account(s) that send email 8. Give it a name and description 9. add your email account you made to it 10. finish 11. test `sp_send_dbmail` ###### Examples ###### Simple Message ```sql EXEC msdb.dbo.sp_send_dbmail @profile_name = 'Adventure Works Administrator', @recipients = '[email protected]', @body = 'The stored procedure finished successfully.', @subject = 'Automated Success Message' ; ``` ###### Email message with the results of a query ```sql EXEC msdb.dbo.sp_send_dbmail @profile_name = 'Adventure Works Administrator', @recipients = '[email protected]', @query = 'SELECT COUNT(1) FROM AdventureWorks2012.Production.WorkOrder WHERE 1 = 1 AND DueDate > ''2004-04-30'' AND DATEDIFF(dd, ''2004-04-30'', DueDate) < 2' , @subject = 'Work Order Count', @attach_query_result_as_file = 1 ; ``` ###### Sending HTML Email ```sql DECLARE @tableHTML NVARCHAR(MAX) ; SET @tableHTML = N'<H1>Work Order Report</H1>' + N'<table border="1">' + N'<tr><th>Work Order ID</th><th>Product ID</th>' + N'<th>Name</th><th>Order Qty</th><th>Due Date</th>' + N'<th>Expected Revenue</th></tr>' + CAST ( ( SELECT td = wo.WorkOrderID, '', td = p.ProductID, '', td = p.Name, '', td = wo.OrderQty, '', td = wo.DueDate, '', td = (p.ListPrice - p.StandardCost) * wo.OrderQty FROM AdventureWorks.Production.WorkOrder AS wo JOIN AdventureWorks.Production.Product AS p ON 
wo.ProductID = p.ProductID
WHERE 1 = 1
    AND DueDate > '2004-04-30'
    AND DATEDIFF(dd, '2004-04-30', DueDate) < 2
ORDER BY DueDate ASC,
    (p.ListPrice - p.StandardCost) * wo.OrderQty DESC
FOR XML PATH('tr'), TYPE
) AS NVARCHAR(MAX) ) +
N'</table>' ;

EXEC msdb.dbo.sp_send_dbmail @recipients='[email protected]',
    @subject = 'Work Order List',
    @body = @tableHTML,
    @body_format = 'HTML' ;
```

## Tips and Tricks

### boost-performance when calling a stored proc from ssis

> If you are calling a proc from an SSIS package, there is no need to get the row count of the records impacted by your query. By setting NOCOUNT to ON, you can significantly boost performance due to the drop in network traffic.
>
> -- <https://tutorials.massstreet.net/v/transact-sql/solutions-to-real-world-problems/lesson-49.-boost-performance-when-calling-a-stored-proc-from-ssis>

```sql
USE AdventureWorks2016

DROP PROCEDURE IF EXISTS dbo.usp_NoCountExample
GO

CREATE PROCEDURE dbo.usp_NoCountExample
AS
BEGIN
    SET NOCOUNT ON;

    SELECT *
    FROM Sales.SalesOrderHeader soh
    JOIN Sales.SalesOrderDetail sod ON soh.SalesOrderID = sod.SalesOrderID

    SET NOCOUNT OFF;
END
GO
```

### calculate age

```sql
-- The '365.25' uses .25 to account for leap years
FLOOR(DATEDIFF(DAY, <birthdate>, GETDATE())/365.25) AS Age
```

### cleaner code in the where clause

In your code, start your WHERE clause with `WHERE 1 = 1` (T-SQL has no boolean literal, so `WHERE TRUE` is not valid) so that all logical conditions become `AND` statements on subsequent lines and are therefore more easily commented out.

```sql
SELECT *
FROM dbo.my_table
WHERE 1 = 1
    AND person_id > 5
    AND age BETWEEN 18 AND 65
    AND first_name LIKE '%Paul%'
```

### datawarehouse calendar table

#### Date Dimension Table

```sql
/***************************************************************************************************************************
 Reference Article: https://www.mssqltips.com/sqlservertip/4054/creating-a-date-dimension-or-calendar-table-in-sql-server/
***************************************************************************************************************************/

-- FIXME Change all instances of `Ticketer` to the database chosen to house these DBO's
USE Ticketer

IF OBJECT_ID('Ticketer.dbo.HolidayDimension') IS NOT NULL DROP TABLE HolidayDimension
IF OBJECT_ID('Ticketer.dbo.DateDimension') IS NOT NULL DROP TABLE DateDimension
IF OBJECT_ID('Ticketer.dbo.TheCalendar') IS NOT NULL DROP VIEW TheCalendar

-- prevent set or regional settings from interfering with
-- interpretation of dates / literals
SET DATEFIRST 7 -- 1 = Monday, 7 = Sunday
  , DATEFORMAT mdy
  , LANGUAGE US_ENGLISH;

-- Start and End points
DECLARE @StartDate DATE = '20100101';
DECLARE @CutoffDate DATE = DATEADD(DAY, -1, DATEADD(YEAR, 30, @StartDate));

-- First CTE generating the list of numbers from the Start and End points
;WITH seq(n) AS
(
    SELECT 0
    UNION ALL
    SELECT n + 1 FROM seq
    WHERE n < DATEDIFF(DAY, @StartDate, @CutoffDate)
),
d(d) AS
( -- Second CTE generating an ISO 8601 Date from the numbers
    SELECT DATEADD(DAY, n, @StartDate) FROM seq
),
src AS
( -- Third CTE taking the generated dates and parsing them into the pertinent information
    SELECT TheDate = CONVERT(DATE, d)
         , TheDay = DATEPART(DAY, d)
         , TheDayName = DATENAME(WEEKDAY, d)
         , TheWeek =
DATEPART(WEEK, d) , TheISOWeek = DATEPART(ISO_WEEK, d) , TheDayOfWeek = DATEPART(WEEKDAY, d) , TheMonth = DATEPART(MONTH, d) , TheMonthName = DATENAME(MONTH, d) , TheQuarter = DATEPART(Quarter, d) , TheYear = DATEPART(YEAR, d) , TheFirstOfMonth = DATEFROMPARTS(YEAR(d), MONTH(d), 1) , TheLastOfYear = DATEFROMPARTS(YEAR(d), 12, 31) , TheDayOfYear = DATEPART(DAYOFYEAR, d) FROM d ), dim AS ( -- Fourth CTE adding additional information of value SELECT TheDate , TheDay , TheDaySuffix = CONVERT(CHAR(2), CASE WHEN TheDay / 10 = 1 THEN 'th' ELSE CASE RIGHT(TheDay, 1) WHEN '1' THEN 'st' WHEN '2' THEN 'nd' WHEN '3' THEN 'rd' ELSE 'th' END END) , TheDayName , TheDayOfWeek , TheDayOfWeekInMonth = CONVERT(TINYINT, ROW_NUMBER() OVER (PARTITION BY TheFirstOfMonth, TheDayOfWeek ORDER BY TheDate)) , TheDayOfYear , IsWeekend = CASE WHEN TheDayOfWeek IN ( CASE @@DATEFIRST WHEN 1 THEN 6 WHEN 7 THEN 1 END, 7 ) THEN 1 ELSE 0 END , TheWeek , TheISOweek , TheFirstOfWeek = DATEADD(DAY, 1 - TheDayOfWeek, TheDate) , TheLastOfWeek = DATEADD(DAY, 6, DATEADD(DAY, 1 - TheDayOfWeek, TheDate)) , TheWeekOfMonth = CONVERT(TINYINT, DENSE_RANK() OVER (PARTITION BY TheYear, TheMonth ORDER BY TheWeek)) , TheMonth , TheMonthName , TheFirstOfMonth , TheLastOfMonth = MAX(TheDate) OVER (PARTITION BY TheYear, TheMonth) , TheFirstOfNextMonth = DATEADD(MONTH, 1, TheFirstOfMonth) , TheLastOfNextMonth = DATEADD(DAY, -1, DATEADD(MONTH, 2, TheFirstOfMonth)) , TheQuarter , TheFirstOfQuarter = MIN(TheDate) OVER (PARTITION BY TheYear, TheQuarter) , TheLastOfQuarter = MAX(TheDate) OVER (PARTITION BY TheYear, TheQuarter) , TheYear , TheISOYear = TheYear - CASE WHEN TheMonth = 1 AND TheISOWeek > 51 THEN 1 WHEN TheMonth = 12 AND TheISOWeek = 1 THEN -1 ELSE 0 END , TheFirstOfYear = DATEFROMPARTS(TheYear, 1, 1) , TheLastOfYear , IsLeapYear = CONVERT(BIT, CASE WHEN TheYear % 400 = 0 OR (TheYear % 4 = 0 AND TheYear % 100 <> 0) THEN 1 ELSE 0 END) , Has53Weeks = CASE WHEN DATEPART(WEEK, TheLastOfYear) = 53 THEN 1 ELSE 0 END , 
Has53ISOWeeks = CASE WHEN DATEPART(ISO_WEEK, TheLastOfYear) = 53 THEN 1 ELSE 0 END
         -- zero-pad the month so e.g. January 2010 yields '201001' (a bare CASE here
         -- would implicitly convert the padded string back to int and drop the zero)
         , YYYYMM = CONCAT(TheYear, RIGHT(CONCAT('0', TheMonth), 2))
         , MMYYYY = CONVERT(CHAR(2), CONVERT(CHAR(8), TheDate, 101)) + CONVERT(CHAR(4), TheYear)
         , Style101 = CONVERT(CHAR(10), TheDate, 101)
         , Style103 = CONVERT(CHAR(10), TheDate, 103)
         , Style112 = CONVERT(CHAR(8), TheDate, 112)
         , Style120 = CONVERT(CHAR(10), TheDate, 120)
    FROM src
)
-- CREATE THE DATE DIMENSION TABLE
SELECT * INTO dbo.DateDimension FROM dim
ORDER BY TheDate
OPTION (MAXRECURSION 0);

CREATE UNIQUE CLUSTERED INDEX CIX_DateDimension ON dbo.DateDimension(TheDate);

-- CREATE THE HOLIDAY DIMENSION TABLE
CREATE TABLE dbo.HolidayDimension
(
    TheDate DATE NOT NULL,
    HolidayText NVARCHAR(255) NOT NULL,
    CONSTRAINT FK_DateDimension FOREIGN KEY(TheDate) REFERENCES dbo.DateDimension(TheDate)
);

CREATE CLUSTERED INDEX CIX_HolidayDimension ON dbo.HolidayDimension(TheDate);

;WITH x AS
(
    SELECT TheDate
         , TheFirstOfYear
         , TheDayOfWeekInMonth
         , TheMonth
         , TheDayName
         , TheDay
         , TheLastDayOfWeekInMonth = ROW_NUMBER() OVER (
               PARTITION BY TheFirstOfMonth, TheDayOfWeek
               ORDER BY TheDate DESC
           )
    FROM dbo.DateDimension
),
s AS
(
    SELECT TheDate
         , HolidayText = CASE
               WHEN (TheDate = TheFirstOfYear) THEN 'New Year''s Day'
               WHEN (TheDayOfWeekInMonth = 3 AND TheMonth = 1 AND TheDayName = 'Monday') THEN 'Martin Luther King Day' -- (3rd Monday in January)
               WHEN (TheDayOfWeekInMonth = 3 AND TheMonth = 2 AND TheDayName = 'Monday') THEN 'President''s Day' -- (3rd Monday in February)
               WHEN (TheMonth = 3 AND TheDay = 31) THEN 'Cesar Chavez Day'
               WHEN (TheLastDayOfWeekInMonth = 1 AND TheMonth = 5 AND TheDayName = 'Monday') THEN 'Memorial Day' -- (last Monday in May)
               WHEN (TheMonth = 7 AND TheDay = 4) THEN 'Independence Day' -- (July 4th)
               WHEN (TheDayOfWeekInMonth = 1 AND TheMonth = 9 AND TheDayName = 'Monday') THEN 'Labour Day' -- (first Monday in September)
               WHEN (TheMonth = 11 AND TheDay = 11) THEN 'Veterans'' Day' -- (November 11th)
               WHEN (TheDayOfWeekInMonth = 4 AND TheMonth = 11 AND TheDayName = 'Thursday') THEN 'Thanksgiving Day' -- (fourth Thursday in November)
               WHEN (TheMonth = 12 AND TheDay = 25) THEN 'Christmas Day'
           END
    FROM x
    WHERE (TheDate = TheFirstOfYear) -- New Years
       OR (TheDayOfWeekInMonth = 3 AND TheMonth = 1 AND TheDayName = 'Monday') -- MLK Day
       OR (TheDayOfWeekInMonth = 3 AND TheMonth = 2 AND TheDayName = 'Monday') -- Presidents Day
       OR (TheMonth = 3 AND TheDay = 31) -- Cesar Chavez Day
       OR (TheLastDayOfWeekInMonth = 1 AND TheMonth = 5 AND TheDayName = 'Monday') -- Memorial Day
       OR (TheMonth = 7 AND TheDay = 4) -- Independence Day
       OR (TheDayOfWeekInMonth = 1 AND TheMonth = 9 AND TheDayName = 'Monday') -- Labor Day
       OR (TheMonth = 11 AND TheDay = 11) -- Veterans Day
       OR (TheDayOfWeekInMonth = 4 AND TheMonth = 11 AND TheDayName = 'Thursday') -- Thanksgiving Day
       OR (TheMonth = 12 AND TheDay = 25) -- Christmas Day
)
INSERT dbo.HolidayDimension(TheDate, HolidayText)
SELECT TheDate, HolidayText FROM s
UNION ALL
SELECT DATEADD(DAY, 1, TheDate), 'Black Friday' -- Special Case
FROM s
WHERE HolidayText = 'Thanksgiving Day'
ORDER BY TheDate;
GO

CREATE VIEW dbo.TheCalendar
AS
SELECT d.*
     , IsHoliday = CASE WHEN h.TheDate IS NOT NULL THEN 1 ELSE 0 END
     , h.HolidayText
FROM dbo.DateDimension AS d
LEFT OUTER JOIN dbo.HolidayDimension AS h
    ON d.TheDate = h.TheDate;
```

### date stamping inserts

```sql
ALTER TABLE dbo.Table
    ADD CONSTRAINT Constraint_Name
    DEFAULT GETDATE() FOR UploadedToSQL
```

If just adding this in the initial table definition, you can define the column as:

```sql
UploadedToSQL DATETIME NOT NULL DEFAULT GETDATE(),
```

and on inserts for that column just pass the value `DEFAULT`

### deleting a large table iteratively

- Neat way to use TOP to delete / truncate
- Deleting a large table takes a lot of resources and causes a lot of locks
- Looping query that deletes 100k rows per iteration until 0 rows remain
- When using TOP with a DELETE statement, you have to use the new syntax - using "( )" parenthesis
- Allows us to delete a very large table without using a lot of peak resources
- Deletes in chunks, allows parts to be deleted, very handy technique

```sql
-- do delete in chunks to minimize max lock/resources
updateMore:
DELETE TOP(100000) DB.dbo.big_table
IF @@rowcount != 0
    goto updateMore;
```

### edit all rows in a gui

> By right-clicking on the table name I select the command "`Edit Top 200 Rows`". By the way, the number of rows loaded with this command can be changed by the option "`Tools > Options > SQL Server Object Explorer > Commands > Value for Edit top <n> Rows command`". If `0` is entered, all rows or options are loaded.
>
> -- <https://smartstore.com/en/edit-database-values-using-sql-server-management-studio#:~:text=By%20right%2Dclicking%20on%20the,rows%20or%20options%20are%20loaded.>

![[2022-03-15-14-30-50.png]]

![[2022-03-15-14-31-14.png]]

### generate a parameter list for all sql server stored procedures and functions

[[( 2022-03-03 Generate a Parameter List for All Sql Server Stored Procedures and Functions]]

### reduce code and save time with default column values

> If you have a column that represents the datetime of when a record was loaded, there is no reason for you to write CURRENT_TIMESTAMP over and over when you insert data into that table.
> Instead, when you create the table, specify that column's default value of GETDATE. You can do it programmatically or in the table designer.
> Below is an example of a data warehouse staging table template containing a primary key and standard audit columns. As you can see, each audit column has been assigned a default value, thereby reducing your insert statement by six lines.
> Considering you will be writing thousands of procs over your career, the time saved from not having to write six lines is significant.
>
> -- <https://tutorials.massstreet.net/v/transact-sql/solutions-to-real-world-problems/lesson-51.-reduce-code-and-save-time-with-default-column-values>

```sql
USE demo

CREATE TABLE YourSchemaName.YourStageTableName(
    [ETLKey] [uniqueidentifier] NOT NULL,
    [UniqueDims] [varbinary](35) NULL,
    [UniqueRows] [varbinary](16) NULL,
    [SourceSystem] [nvarchar](255) NULL,
    [Cleansed] [bit] NULL,
    [ErrorRecord] [bit] NULL,
    [ErrorReason] [nvarchar](255) NULL,
    [Processed] [bit] NULL,
    [RunDate] [datetime] NULL,
    CONSTRAINT [PK_YourStageTableName] PRIMARY KEY CLUSTERED
    (
        [ETLKey] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [YourSchemaName].[YourStageTableName] ADD CONSTRAINT [DF_YourStageTableName_ETLKey] DEFAULT (newid()) FOR [ETLKey]
GO
ALTER TABLE [YourSchemaName].[YourStageTableName] ADD CONSTRAINT [DF_YourStageTableName_SourceSystem] DEFAULT (N'Copia') FOR [SourceSystem]
GO
ALTER TABLE [YourSchemaName].[YourStageTableName] ADD CONSTRAINT [DF_YourStageTableName_Cleansed] DEFAULT ((0)) FOR [Cleansed]
GO
ALTER TABLE [YourSchemaName].[YourStageTableName] ADD CONSTRAINT [DF_YourStageTableName_ErrorRecord] DEFAULT ((0)) FOR [ErrorRecord]
GO
ALTER TABLE [YourSchemaName].[YourStageTableName] ADD CONSTRAINT [DF_YourStageTableName_Processed] DEFAULT ((0)) FOR [Processed]
GO
ALTER TABLE [YourSchemaName].[YourStageTableName] ADD CONSTRAINT [DF_YourStageTableName_RunDate] DEFAULT (getdate()) FOR [RunDate]
GO
```

### rename objects

```sql
ALTER DATABASE [Test1] MODIFY NAME = [Test2]

--or
sp_renamedb 'Test1', 'Test2'

--or
sp_rename 'Test1', 'Test2', 'DATABASE';
```

### select data into xml

- <https://www.sqlshack.com/working-with-xml-data-in-sql-server/>

```sql
SELECT CAST('<Name><FName>Carol</FName><LName>Elliot</LName></Name>' AS XML)

USE AdventureWorks2012
GO

SELECT Cust.CustomerID,
       OrderHeader.CustomerID,
       OrderHeader.SalesOrderID,
       OrderHeader.Status
FROM
Sales.Customer Cust INNER JOIN Sales.SalesOrderHeader OrderHeader ON Cust.CustomerID = OrderHeader.CustomerID FOR XML AUTO; SELECT * FROM ticketer.sp.ticket for xml path ('Ticket'), root('Tickets') ``` ### sending notification emails with t sql without using hardcoded email addresses ```sql USE demo DECLARE @OperatorName sysname = N'YourOperatorName'; DECLARE @OperatorEmailAddress nvarchar(100) = (SELECT email_address FROM msdb.dbo.sysoperators WHERE [name] = @OperatorName); PRINT @OperatorEmailAddress ``` ### setting up queries for ablation testing > Sometimes you write a complex SQL statement that doesn’t work, and you do not know why. The best way to figure out what is wrong is by ablation testing. > Ablation testing is where you turn things off until you isolate the problem. In the case of SQL queries, this involves commenting out lines of a complex WHERE clause (or JOIN) until you find the thing that is not working as designed. > However, if your WHERE clause starts with a filter, you have to move it so you can comment it out. If you start your WHERE clause with 1 = 1, which is always true, then you preserve syntax and can easily comment out your filters without having to rewrite your code. > No matter how simple your SQL statement is, I recommend that you get in the habit of creating ablation testing ready SQL statements. > > -- <https://tutorials.massstreet.net/v/transact-sql/solutions-to-real-world-problems/lesson-50.-setting-up-queries-for-ablation-testing> ```sql USE AdventureWorks2016 SELECT * FROM Sales.SalesOrderHeader soh JOIN Sales.SalesOrderDetail sod ON soh.SalesOrderID = sod.SalesOrderID WHERE 1 = 1 ``` #### An Example Of Ablation Testing Here we suspect that two lines in the WHERE clause are causing our query to act up. 
```sql
USE AdventureWorks2016

SELECT CONCAT(p.FirstName, ' ', p.LastName) AS SalesPerson,
       DATEPART(month, soh.OrderDate) AS MonthOfSale,
       SUM(sod.LineTotal) AS TotalSales
FROM Person.Person p
JOIN Sales.SalesPerson sp ON p.BusinessEntityID = sp.BusinessEntityID
JOIN Sales.SalesOrderHeader soh ON sp.BusinessEntityID = soh.SalesPersonID
JOIN Sales.SalesOrderDetail sod ON soh.SalesOrderID = sod.SalesOrderID
WHERE 1 = 1
    AND sp.Bonus BETWEEN 3000.00 AND 6000.00
    --AND sp.SalesYTD >= 2000000.00
    --AND sod.UnitPrice < 2000
    AND (YEAR(soh.OrderDate) BETWEEN 2013 AND 2014 OR YEAR(soh.OrderDate) = 2011)
GROUP BY p.LastName, p.FirstName, DATEPART(month, soh.OrderDate)
ORDER BY p.LastName, p.FirstName, DATEPART(month, soh.OrderDate)
```

### sql server script to rebuild all indexes for all tables and all databases

[[( 2022-03-03 Sql Server Script to Rebuild All Indexes for All Tables and All Databases]]

### using hashbytes to compare character strings

> If you have to compare string values on a regular basis, and especially if you have to compare more than one column at a time, instead of comparing the columns directly, hash the columns first. Hashing creates a string based on the encryption algorithm you select. You can pass in concatenated string values. The ultimate benefit is that all the algorithms generate a value that is a set number of characters. This comes in handy for storing this value when it comes time for loading junk dimensions into data warehouses.
> You cannot use `HASHBYTES()` on `NVARCHAR(MAX)` in SQL Server. `HASHBYTES` has a 4,000-character limit for `NVARCHAR`. Technically, you can get around this programmatically, but if your data is longer than 4,000 characters, I’d suggest looking for another way to identify the record.
> > -- <https://tutorials.massstreet.net/v/transact-sql/solutions-to-real-world-problems/lesson-63.-using-hashbytes-to-compare-character-strings> ```sql DECLARE @Statement1 NVARCHAR(255) DECLARE @Statement2 NVARCHAR(255) SET @Statement1 = 'Army And Navy Play For Second' SET @Statement2 = 'Rock Chalk Jayhawk' PRINT LEN(@Statement1) PRINT LEN(@Statement2) PRINT HASHBYTES('MD5', @Statement1) PRINT HASHBYTES('MD5', @Statement2) PRINT LEN(HASHBYTES('MD5', @Statement1)) PRINT LEN(HASHBYTES('MD5', @Statement2)) -- 29 -- 18 -- 0x9A6A731EB1D5FE8F3B93A576A9740E3E -- 0xFDEFE8D5B2FF212060E733A0A01A1A7F -- 16 -- 16 ``` ```sql DECLARE @Statement1 NVARCHAR(255) DECLARE @Statement2 NVARCHAR(255) SET @Statement1 = 'WalMart' SET @Statement2 = 'walmart' PRINT HASHBYTES('MD5', @Statement1) --0xDE2E98EE55B1B249B711300DE7047C75 PRINT HASHBYTES('MD5', @Statement2) --0xC48604C9A656E09D87E99B820499D430 ``` #### Using Pipe To Hash Multiple Columns For Matching <https://tutorials.massstreet.net/v/transact-sql/solutions-to-real-world-problems/lesson-64.-using-pipe-to-hash-multiple-columns-for-matching> > When using the `HASHBYTES()` function in SQL Server to hash multiple columns for matching, use `CONCAT()` and separate values with a pipe. > Usually, the values in columns are disparate enough that you really do not have to worry. For example, it was years before I actually discovered an edge case. However, I did discover at least one scenario where concatenating the columns was not enough to develop a unique record. This occurred in a table with few columns and small amounts of data. > A better approach is to just `CONCAT()` columns with a pipe between values. Since this character is rarely used, it lowers the probability of having different values hash the same. 
```sql
DECLARE @SampleStageTable AS TABLE(ID INT, VALUE1 NVARCHAR(10), VALUE2 NVARCHAR(10))

INSERT INTO @SampleStageTable(ID, VALUE1, VALUE2)
SELECT 1, '012', '345'
UNION
SELECT 2, '01', '2345'

SELECT * FROM @SampleStageTable
GO -- batch separator so the table variable can be re-declared below
-------------------------------------------------------------------------------------
-- both rows hash the same, because '012' + '345' = '01' + '2345'
DECLARE @SampleStageTable AS TABLE(ID INT, VALUE1 NVARCHAR(10), VALUE2 NVARCHAR(10))

INSERT INTO @SampleStageTable(ID, VALUE1, VALUE2)
SELECT 1, '012', '345'
UNION
SELECT 2, '01', '2345'

SELECT ID, VALUE1, VALUE2,
       HASHBYTES('MD5', CONCAT(VALUE1, VALUE2)) AS ROWHASH
FROM @SampleStageTable
```

So separate columns with a `|`

```sql
DECLARE @SampleStageTable AS TABLE(ID INT, VALUE1 NVARCHAR(10), VALUE2 NVARCHAR(10))

INSERT INTO @SampleStageTable(ID, VALUE1, VALUE2)
SELECT 1, '012', '345'
UNION
SELECT 2, '01', '2345'

SELECT ID, VALUE1, VALUE2,
       HASHBYTES('MD5', CONCAT(VALUE1, '|', VALUE2)) AS ROWHASH
FROM @SampleStageTable
```

### wildcard search use index

![[2022-01-24-13-36-52.png]]

You want to know the coolest SQL hack I learned this week?

In SQL, we often need to search character fields using the percentage (`%`) wildcard. When I put the wildcard at the end of the search string (`String%`) it uses the index, but if I put it at the front (`%String`) it does a scan of the index. This significantly increases your run time!

```sql
-- Example:
SELECT * FROM table WHERE column1 LIKE 'Some_string%'
-- vs.
SELECT * FROM table WHERE column1 LIKE '%Some_string'
```

We say that option 1 is Sargable (Search ARGument ABLE), and option 2 is not Sargable, meaning option 2 could not leverage the index on the column. To make "ends with" searches Sargable and significantly decrease your run time, search against the reversed string instead:

```sql
SELECT * FROM table WHERE REVERSE(column1) LIKE REVERSE('Some_string') + '%'
```

Note the optimizer can only seek on this if there is an index on the reversed value, e.g. a persisted computed column holding `REVERSE(column1)` with its own index; applying `REVERSE()` to a bare, un-indexed expression still forces a scan.

And boom, your query runs much, much faster!
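The reversed-search trick can be sketched end to end. The table and column names here (`dbo.Customers`, `Email`) are hypothetical and only for illustration; the pattern is to persist the reversed value as a computed column, index it, and then search that column with a trailing wildcard:

```sql
-- Hypothetical table for illustration
CREATE TABLE dbo.Customers
(
    CustomerID INT IDENTITY PRIMARY KEY,
    Email      VARCHAR(320) NOT NULL
);

-- Persisted computed column holding the reversed value, plus an index on it.
-- REVERSE() is deterministic, so it qualifies for a persisted, indexable column.
ALTER TABLE dbo.Customers
    ADD Email_Reversed AS REVERSE(Email) PERSISTED;

CREATE INDEX IX_Customers_Email_Reversed
    ON dbo.Customers (Email_Reversed);

-- "Ends with" search that can now seek instead of scan:
-- REVERSE('@gmail.com') = 'moc.liamg@', so the predicate has no leading wildcard
SELECT CustomerID, Email
FROM dbo.Customers
WHERE Email_Reversed LIKE REVERSE('@gmail.com') + '%';
```

The design choice is the same trade-off as any covering index: one extra column of storage and write cost buys an index seek on a predicate that would otherwise always scan.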
### yyyymm date part calculation

```sql
SELECT DATEPART(YEAR, GETDATE()) * 100 + DATEPART(MONTH, GETDATE()) AS 'YearMo'
```

## tools

### tsql test

- <https://www.red-gate.com/products/sql-development/sql-test/>
- <https://www.mssqltips.com/sqlservertip/5195/free-database-unittesting-framework-for-sql-server/>
- <https://tsqlt.org/user-guide/>
- <https://tsqlt.org/user-guide/quick-start/>
- <https://tsqlt.org/use-tsqlt-framework/>
- <https://www.red-gate.com/simple-talk/databases/sql-server/t-sql-programming-sql-server/getting-started-testing-databases-with-tsqlt/>

#### Setup

- Download the `.zip`
- Run the `PrepareServer.sql` script once per server
- Execute `Example.sql` for a sample database and all objects
- Only implement in your dev environment, never production.
- If using RedGate products, try to add the `tSQLt` objects to your gitignore

#### Assertions

- <https://tsqlt.org/user-guide/assertions/>

```sql
EXEC tSQLt.AssertEmptyTable @TableName = 'dbo.actual';
-- AssertEmptyTable checks if a table is empty. If the table does contain any rows, the failure message displays all rows found.

EXEC tSQLt.AssertEquals @Expected = 'hello', @Actual = 'hello';
-- AssertEquals compares two values for equality. NULL is considered equal to NULL. Any non-NULL value is considered not equal to NULL.

EXEC tSQLt.AssertEqualsString @Expected = 'John Smith', @Actual = CONCAT_WS(' ', 'John', 'Smith');
-- Similar to AssertEquals but only between 2 strings

/*
The following datatypes are known to be unsupported by AssertEqualsTable: text, ntext, image, xml, geography, geometry, rowversion and CLR datatypes that are not marked comparable and byte ordered.

AssertEqualsTable compares the contents of two tables for equality. It does this by comparing each row of the tables for an exact match on all columns.
If the tables do not contain the same data, the failure message displays which rows could not be matched.
*/
EXEC tSQLt.AssertEqualsTable @Expected = 'expected', @Actual = 'actual';
-- Above Block ^^^

EXEC tSQLt.AssertEqualsTableSchema @Expected = 'expected', @Actual = 'actual';
-- Raises a 'failure' error if the schemas (columns, etc.) of the expected table and the actual table are not equal.

EXEC tSQLt.AssertLike @ExpectedPattern = 'John%Smith', @Actual = 'John Smith';
-- AssertLike checks if the actual value matches the expected pattern, and fails the test if it does not match.

EXEC tSQLt.AssertNotEquals @Expected = 'expected', @Actual = 'actual';
-- AssertNotEquals compares two values for inequality.

EXEC tSQLt.AssertObjectDoesNotExist @ObjectName = 'dbo.MyProcedure';
-- AssertObjectDoesNotExist checks that an object with the specified name does not exist in the database. If the name begins with a '#', indicating it is a temporary object (such as a temporary table), then tempdb is checked for the object.

EXEC tSQLt.AssertObjectExists @ObjectName = 'dbo.MyProcedure';
-- AssertObjectExists checks if an object with the specified name exists in the database. If the object name begins with a '#', indicating it is a temporary object (such as a temporary table), then tempdb is searched for the object.

/*
AssertResultSetsHaveSameMetaData executes the expected command and actual command, capturing the result sets from each. The meta data (i.e. the column names and properties) are compared between the two result sets. If the meta data contains differences, then AssertResultSetsHaveSameMetaData fails the test.

This may be useful, for example, when testing a stored procedure which returns a result set and the names and data types of the columns should be validated.
*/
EXEC tSQLt.AssertResultSetsHaveSameMetaData
    @expectedCommand = 'SELECT [Name] = ''John Smith''',
    @actualCommand = 'SELECT [Name] = CONCAT_WS('' '', ''John'', ''Smith'')'
-- Above Block ^^^

EXEC tSQLt.Fail 'Invalid random value returned: ', '700';
-- Fails the test case with optional message params
```

#### Expectations

```sql
/*
tSQLt.ExpectException marks the point in the test after which an error should be raised. All parameters are optional. Independent of the supplied parameters, the test fails if no error is raised after the tSQLt.ExpectException call.

Passing in a NULL in any parameter has the same effect as omitting that parameter. The parameters allow the expected exception to be constrained further.

There can be only one call to tSQLt.ExpectException per test. However, a call to tSQLt.ExpectException can follow a call to tSQLt.ExpectNoException.
*/

-- 2 Main call patterns
EXEC tSQLt.ExpectException @ExpectedMessage = 'Some Expected Message', @ExpectedSeverity = NULL, @ExpectedState = NULL;

EXEC tSQLt.ExpectException @ExpectedMessagePattern = '%Part of Expected Message%', @ExpectedSeverity = NULL, @ExpectedState = NULL;

-- Example: Using tSQLt.ExpectException to check that correct error is raised
CREATE PROCEDURE PurgeTableTests.[test dbo.PurgeTable rejects not existing table]
AS
BEGIN
    EXEC tSQLt.ExpectException
        @ExpectedMessage = 'Table dbo.DoesNotExist not found.',
        @ExpectedSeverity = 16,
        @ExpectedState = 10;

    EXEC dbo.PurgeTable @TableName = 'dbo.DoesNotExist';
END
```

```sql
/*
tSQLt.ExpectNoException marks the point in the test after which no error should be raised.

tSQLt.ExpectNoException specifies that the intention of the test is to assert that no error is raised. Therefore the test will fail, instead of error, if an error is encountered after tSQLt.ExpectNoException was called.

There can be only one call to tSQLt.ExpectNoException per test. However, a call to tSQLt.ExpectNoException can be followed by a call to tSQLt.ExpectException.
*/
EXEC tSQLt.ExpectNoException @Message = 'supplemental fail message'

-- Example: Using tSQLt.ExpectNoException to assert that no error is raised
CREATE PROCEDURE PurgeTableTests.[test dbo.PurgeTableIfExists ignores not existing table]
AS
BEGIN
    EXEC tSQLt.ExpectNoException;
    EXEC dbo.PurgeTableIfExists @TableName = 'dbo.DoesNotExist';
END
```

#### Isolating Dependencies

```sql
/* We want to be able to test constraints individually. We can use FakeTable to remove all the constraints on a table, and ApplyConstraint to add back in the one which we want to test. ApplyConstraint in combination with FakeTable allows constraints to be tested in isolation of other constraints on a table.

ApplyConstraint works with the following constraint types:
- CHECK constraints
- FOREIGN KEY constraints
- UNIQUE constraints
- PRIMARY KEY constraints

Cascade properties of FOREIGN KEY constraints are not preserved. SQL Server automatically creates unique indexes for UNIQUE and PRIMARY KEY constraints. Those indexes for "applied" constraints do not preserve asc/desc properties of the original supporting indexes. */
EXEC tSQLt.ApplyConstraint @TableName = 'dbo.ReferencingTable', @ConstraintName = 'ReferencingTable_ReferencedTable_FK';

EXEC tSQLt.ApplyTrigger @TableName = 'dbo.TargetTable', @TriggerName = 'trAuditInserts' -- We want to be able to test triggers individually. We can use FakeTable to remove all the constraints and triggers from a table, and ApplyTrigger to add back in the one which we want to test.
```

```sql
/* Code that calls a function can be difficult to test if that function performs significant logic. We want to isolate the code we are testing from the logic buried in the functions that it calls. To create independent tests, we can replace a called function with a fake function. The fake function will perform much simpler logic that supports the purpose of our test. Often, the fake function will simply return a hard-coded value.
Alternatively, the fake function may 'validate' the parameters it receives by returning one value if the parameters match expectations, and another value if they do not. That way the code that calls the function will have a different result, and thus the parameter passed to the function can be tested.

Example: Using FakeFunction to avoid executing the logic of a complex function */
EXEC tSQLt.FakeFunction @FunctionName = 'dbo.ComputeCommission', @FakeFunctionName = 'dbo.Fake_ComputeCommission'
--------------------------------------------------------------------------------
EXEC tSQLt.NewTestClass 'SalesAppTests';
GO
CREATE FUNCTION SalesAppTests.Fake_ComputeCommission (@EmployeeId INT, @RevenueFromSales DECIMAL(10, 4))
RETURNS DECIMAL(10, 4)
AS
BEGIN
    RETURN 1234.5678;
END
GO
CREATE PROCEDURE SalesAppTests.[test SalesReport returns revenue and commission]
AS
BEGIN
    -------Assemble
    EXEC tSQLt.FakeFunction 'SalesApp.ComputeCommission', 'SalesAppTests.Fake_ComputeCommission';
    EXEC tSQLt.FakeTable 'SalesApp.Employee';
    EXEC tSQLt.FakeTable 'SalesApp.Sales';

    INSERT INTO SalesApp.Employee (EmployeeId) VALUES (1);
    INSERT INTO SalesApp.Sales (EmployeeId, SaleAmount) VALUES (1, 10.1);
    INSERT INTO SalesApp.Sales (EmployeeId, SaleAmount) VALUES (1, 20.2);

    -------Act
    SELECT EmployeeId, RevenueFromSales, Commission
    INTO SalesAppTests.Actual
    FROM SalesApp.SalesReport;

    -------Assert
    SELECT TOP(0) * INTO SalesAppTests.Expected FROM SalesAppTests.Actual;

    INSERT INTO SalesAppTests.Expected (EmployeeId, RevenueFromSales, Commission) VALUES (1, 30.3, 1234.5678);

    EXEC tSQLt.AssertEqualsTable 'SalesAppTests.Expected', 'SalesAppTests.Actual';
END
```

```sql
/* We want to keep our test cases focused and do not want to insert data which is irrelevant to the current test. However, table and column constraints can make this difficult. FakeTable allows tests to be written in isolation of the constraints on a table.
FakeTable creates an empty version of the table, without the constraints, in place of the specified table. Therefore any statements which access the table during the execution of the test case are actually working against the fake table with no constraints. When the test case completes, the original table is put back in place because of the rollback which tSQLt performs at the end of each test case. FakeTable can be called on Tables, Views and Synonyms (a Synonym has to point to a table or view in the current database). */
EXEC tSQLt.FakeTable @TableName = 'dbo.CurrencyConversion' -- See Above ^^^

/* Oftentimes an object needs to be replaced with a mock. The first step in replacing an object with a mock is to remove the original object. tSQLt.RemoveObject removes the original object by renaming it. The new name for the object is automatically generated to avoid collisions with other objects.

Example: Replacing a function with a stub

In this example, we are testing that the GetRecentUsers view returns only users from the past 10 minutes. We have a util.GetCurrentDate function which returns the current date and is used by GetRecentUsers. For the purposes of our test, we want to return a constant date. RemoveObject is called so that we can create a stub function which returns a hard-coded value. */
EXEC tSQLt.RemoveObject
     @ObjectName = 'object name'
   , @NewName = 'new object name' OUTPUT
   , @IfExists = ( 0 | 1 )

EXEC tSQLt.RemoveObjectIfExists
     @ObjectName = 'object name'
   , @NewName = 'new object name' OUTPUT
-- tSQLt.RemoveObjectIfExists is a short form of tSQLt.RemoveObject with @IfExists = 1
```

```sql
/* Large monolithic stored procedures are difficult to test and maintain. We want to keep our stored procedures small and focused. We also want to test our stored procedures independently of one another. To create independent tests, we can replace the functionality of a stored procedure with a spy. The spy will record the parameters that were passed to it.
SpyProcedure allows tests to be written for a procedure in isolation of the other procedures that it calls.

SpyProcedure creates a table with the name of @ProcedureName + '_SpyProcedureLog'. This table contains an identity column '_id_' and a column for each procedure parameter (except for cursor output parameters). SpyProcedure also replaces the procedure named by @ProcedureName with the command provided in the @CommandToExecute parameter and a command to insert the parameter values into the SpyProcedureLog table. Therefore, whenever @ProcedureName is executed during the test, instead of actually running the procedure, a new log entry is made in the @ProcedureName_SpyProcedureLog table and @CommandToExecute is called.

SpyProcedure cannot be used with temporary stored procedures (stored procedures whose name begins with #). SpyProcedure cannot be used with procedures which have more than 1020 parameters.

Example: Using SpyProcedure to record parameters passed to a procedure

In this example, we have a sales report which will show either current or historical data based on a parameter. Here we'll want to test that the SalesReport procedure handles the parameter correctly and calls either HistoricalReport or CurrentReport. We'll use AssertEqualsTable to make sure the currency parameter is passed correctly to HistoricalReport by looking in the spy's log.
*/
EXEC tSQLt.SpyProcedure @ProcedureName = 'procedure name', @CommandToExecute = 'command'
--------------------------------------------------------------------------------
CREATE PROCEDURE testFinancialApp.[test that SalesReport calls HistoricalReport when @showHistory = 1]
AS
BEGIN
    -------Assemble
    EXEC tSQLt.SpyProcedure 'FinancialApp.HistoricalReport';
    EXEC tSQLt.SpyProcedure 'FinancialApp.CurrentReport';

    -------Act
    EXEC FinancialApp.SalesReport 'USD', @showHistory = 1;

    SELECT currency INTO actual FROM FinancialApp.HistoricalReport_SpyProcedureLog;

    -------Assert HistoricalReport got called with right parameter
    SELECT currency INTO expected FROM (SELECT 'USD') ex(currency);
    EXEC tSQLt.AssertEqualsTable 'expected', 'actual';

    -------Assert CurrentReport did not get called
    IF EXISTS (SELECT 1 FROM FinancialApp.CurrentReport_SpyProcedureLog)
        EXEC tSQLt.Fail 'SalesReport should not have called CurrentReport when @showHistory = 1';
END

/* Example: Using SpyProcedure to return a hard-coded set of output parameter values

Suppose we want to test the procedure IsDriveSpaceTooLow, which returns a 0 if there is enough disk space, and -1 if there is not enough disk space. IsDriveSpaceTooLow conveniently calls another procedure, GetDiskSpace, which returns an output parameter for the current disk space. Since setting up a test to fill the drive to a certain size is probably a bad idea, we can test IsDriveSpaceTooLow by using SpyProcedure on GetDiskSpace and hard-coding the output parameter for the purposes of the test.
*/
CREATE PROCEDURE DiskUtil.GetDiskSpace @DiskSpace INT OUT
AS
BEGIN
    -- This procedure does something to return the disk space as @DiskSpace output parameter
END
GO

CREATE PROCEDURE DiskUtil.IsDriveSpaceTooLow
AS
BEGIN
    DECLARE @DiskSpace INT;
    EXEC DiskUtil.GetDiskSpace @DiskSpace = @DiskSpace OUT;
    IF @DiskSpace < 512
        RETURN -1;
    ELSE
        RETURN 0;
END
GO

CREATE PROCEDURE testDiskUtil.[test IsDriveSpaceTooLow returns -1 if drive space is less than 512 MB]
AS
BEGIN
    EXEC tSQLt.SpyProcedure 'DiskUtil.GetDiskSpace', 'SET @DiskSpace = 511';

    DECLARE @ReturnValue INT;
    EXEC @ReturnValue = DiskUtil.IsDriveSpaceTooLow;

    EXEC tSQLt.AssertEquals -1, @ReturnValue;
END
```

#### Usage

- "Test Classes" are a test schema.
- "Test Cases", the things that are actually executed, are stored procedures.
- Test cases need to be under the test class schema, and their names must start with `test`, like: `TryItOut.[test this is what it should look like]`

##### Create/Delete Test Classes

```sql
EXEC tSQLt.NewTestClass 'TryItOut' -- Create a new "Test Class"
EXEC tSQLt.DropClass 'TryItOut' -- Clean up the test class when finished
```

###### Rename an existing Test Class

```sql
EXEC tSQLt.RenameClass @SchemaName = 'class name', @NewSchemaName = 'new class name'
```

##### Create New Test Cases

```sql
CREATE PROCEDURE TryItOut.[test this causes a failure]
AS
BEGIN
    -- Create a new test case that fails
    EXEC tSQLt.Fail 'This is what a failure looks like';
END
GO

CREATE PROCEDURE TryItOut.[test this one passes]
AS
BEGIN
    -- Create a new test case that passes
    DECLARE @sum INT = 1 + 2;
    EXEC tSQLt.AssertEquals 3, @sum;
END
```

##### Executing Your Test Cases

```sql
EXEC tSQLt.RunAll -- Execute all test cases in the database
EXEC tSQLt.RunTestClass 'TryItOut' -- Execute only test cases for the test class 'TryItOut' even if there are others
EXEC tSQLt.Run @TestName = 'TryItOut.[test this one passes]' -- Execute only a single named test in case you only want to test specific ones and not a whole suite
```

### ssrs

- <https://docs.microsoft.com/en-us/sql/reporting-services/create-a-basic-table-report-ssrs-tutorial?view=sql-server-2017>

### ssis

#### Resources

- <https://www.mssqltips.com/sqlservertip/6409/extract-import-and-migrate-ssis-project/>
- <https://www.mssqltips.com/sqlservertip/3676/sql-server-integration-services-2016-incremental-package-deployment/>
- <https://www.timmitchell.net/post/2017/05/31/deleting-a-package-from-the-ssis-catalog/>

#### KT Performed

| Date       | Link                           |
|------------|--------------------------------|
| 2022-09-21 | <https://youtu.be/f7UKX5MjZA8> |
| 2022-09-21 | <https://youtu.be/G_wG-bzTCZY> |

#### SSIS Logging

- <https://www.sqlshack.com/overview-of-ssis-package-logging/>

### sql server management studio

#### Tips and Tricks

##### custom-color-theme-settings-import-export

<https://www.sentryone.com/blog/aaronbertrand/making-ssms-pretty-my-dark-theme>

##### database-diagrams-can-build-your-tables-and-relationships

In the diagrams pane, if you make a new diagram, you can build your tables and their relationships in a GUI all at once.

### sql agent

#### resources

- <https://www.mssqltips.com/sqlservertip/6111/query-sql-server-agent-jobs-job-steps-history-and-schedule-system-tables/>
- <https://www.mssqltips.com/sqlservertip/5969/manage-sql-server-agent-history-more-effectively/>
- <https://www.mssqltips.com/sqlservertip/5731/how-to-pass-data-between-sql-server-agent-job-steps/>
- <https://www.mssqltips.com/sqlservertip/5911/run-python-scripts-in-sql-server-agent/>
- <https://www.mssqltips.com/sqlservertip/1394/how-to-store-longer-sql-agent-job-step-output-messages/>

#### check current status of sql agent jobs

```sql
SELECT job_id, name, enabled FROM msdb.dbo.sysjobs

-- or this version
SELECT SJ.job_id
     , SJ.name
     , SJ.enabled
     , SC.name AS category
FROM msdb.dbo.sysjobs AS SJ
INNER JOIN msdb.dbo.syscategories AS SC ON SJ.category_id = SC.category_id
```

#### toggle all sql server agent jobs

##### Disable

###### Blanket Disable
```sql
USE MSDB;
GO
DECLARE @job_id uniqueidentifier
DECLARE job_cursor CURSOR READ_ONLY FOR
    SELECT job_id FROM msdb.dbo.sysjobs WHERE enabled = 1
OPEN job_cursor
FETCH NEXT FROM job_cursor INTO @job_id
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_update_job @job_id = @job_id, @enabled = 0
    FETCH NEXT FROM job_cursor INTO @job_id
END
CLOSE job_cursor
DEALLOCATE job_cursor
```

###### Disable by name

```sql
USE MSDB;
GO
DECLARE @job_id uniqueidentifier
DECLARE job_cursor CURSOR READ_ONLY FOR
    SELECT job_id FROM msdb.dbo.sysjobs WHERE enabled = 1 AND [name] like N'Admin%'
OPEN job_cursor
FETCH NEXT FROM job_cursor INTO @job_id
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_update_job @job_id = @job_id, @enabled = 0
    FETCH NEXT FROM job_cursor INTO @job_id
END
CLOSE job_cursor
DEALLOCATE job_cursor
```

###### Disable Jobs By Job Category

```sql
USE MSDB;
GO
DECLARE @job_id uniqueidentifier
DECLARE job_cursor CURSOR READ_ONLY FOR
    SELECT SJ.job_id
    FROM msdb.dbo.sysjobs SJ
    INNER JOIN msdb.dbo.syscategories AS SC ON SJ.category_id = SC.category_id
    WHERE SJ.enabled = 1 AND SC.[name] = N'Database Maintenance'
OPEN job_cursor
FETCH NEXT FROM job_cursor INTO @job_id
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_update_job @job_id = @job_id, @enabled = 0
    FETCH NEXT FROM job_cursor INTO @job_id
END
CLOSE job_cursor
DEALLOCATE job_cursor
```

##### Enable

```sql
USE MSDB;
GO
DECLARE @job_id uniqueidentifier
DECLARE job_cursor CURSOR READ_ONLY FOR
    SELECT job_id FROM msdb.dbo.sysjobs WHERE enabled = 0
OPEN job_cursor
FETCH NEXT FROM job_cursor INTO @job_id
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_update_job @job_id = @job_id, @enabled = 1
    FETCH NEXT FROM job_cursor INTO @job_id
END
CLOSE job_cursor
DEALLOCATE job_cursor
```

### mssql

There is a cli client that can access SQL Server from the cli. It works on all platforms, allowing queries to be run and output to be sent to the cli.
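A minimal sketch of a non-interactive invocation (the server, database, and login below are placeholder assumptions; `mssql-cli` can also read the password from the `MSSQL_CLI_PASSWORD` environment variable so it never has to appear on the command line):

```bash
# Hypothetical server/database/login, for illustration only
export MSSQL_CLI_PASSWORD='<your password>'  # avoids the interactive password prompt
mssql-cli -S localhost -d MyDatabase -U sa \
  -Q "SELECT TOP 5 name FROM sys.tables"    # -Q runs the query and exits
```

This shape is what makes the cron and shell-pipeline use cases below practical: the query result goes straight to stdout.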
#### Potential use cases

- Cronjobs to routinely run scripts that insert data at regular intervals or on certain system events
- join with other scripts and tools like [[cli.cmd.ledger-cli]], [[s.l.bash]], [[cli.cmd.fzf]], etc to make for a robust workflow to manage finances

#### Installation

```bash
pip install mssql-cli
```

### redgate

#### Resources

- **The Tools:** <https://www.red-gate.com/products/sql-development/sql-source-control/>
- **About it:** <https://www.youtube.com/watch?v=aR5IHfHvh98>
- [Version Control Demo With Git](https://youtu.be/mNXipSFbV0s)

#### Workflow

##### Deploying New DB from TEST to PROD

1. Create same name copy of DB onto PROD
2. Link that DB with the working folder you've been working out of
3. `Get Latest`
4. Apply changes
   - If there are changes on the remote repository you can pull those too to apply those changes as well

##### Every Day Workflow

- This assumes all setup and connection work has been done for redgate work
- The following examples assume you're performing all this work on 1 SQL Server, `TEST`, and that, downstream, migration scripts will be applied to push those changes automatically onto the `PROD` server without human intervention.
###### Getting Started and Setup

- Main repository is Repo @ `PROD`
- They pull the repo and branch down to local machine
- People make their own branch off of `PROD` that will contain the changes they want for specific work
- they are now up to date and ready to work
- Cloning the database you want to work in
  <!-- - Right click the database you want to change - `Tasks > Copy Database` -->
  - Open [[s.q.tsql.tools.redgate.products.sql-compare]]
  - Select Source Server and Database
  - Select Target Server and in the database selection dropdown type the name of your new DEV copy of the database
    - The new empty database naming convention being DATABASE_BSJ_DEV: the database name in all caps, followed by the developer's initials, and ended with DEV so people know it's someone's development copy
  - When the name is ready click the `Create` link below the dropdown
  - Continue through the Sql Compare menus; no need to save or back up the migration scripts
  - After copying of the database is complete you're ready to begin work if changes are only schema related
- Right click that new DB and `Link with source control`
  - `Link with my source control system`
  - Select the folder location of the database from within the repository
  - `Get Latest` if applicable
  - `Apply Changes` if applicable
- Developer is now free to make changes to schema, objects, etc inside their test database
- If data is also required within the dev copy from the source database then open [[s.q.tsql.tools.redgate.products.data-compare]]
  - Select Source Server and Database
  - Select Target Server and in the database selection dropdown type the name of your new DEV copy of the database you made prior
  - No need to save the migration scripts; run everything within the Data Compare tool for deployment
  - Select the data to copy across databases
  - `Deploy`
- Now your schema and data all match across the servers and databases
- Right click that new DB and `Link with source control`
  - `Link with my source control system`
  - Select the folder location of the database from within the repository
  - `Get Latest` if applicable
  - `Apply Changes` if applicable
- Developer is now free to make changes to schema, objects, etc inside their test database

###### Getting your change into the Pull Request process

- Make your changes: new UDF, new USP, modify existing items, make new tables, alter other tables, etc.
- When ready, or incrementally throughout the process, add your changes to your branch via commits
- when you are ready to enter the PR process push your branch to the remote repository
- open pull request of your branch into the master branch
- Normal PR process occurs here
- Once PR is merged then at that point CI/CD pipelines should take over to build, test, and deploy the changes to the `PROD` server with no human intervention.
- **IF NO AUTOMATED PROCESS YET PROCEED TO SECTION BELOW**

####### Manual implementation of new changes in the absence of a CI/CD pipeline

<!-- - Back in [[s.db.ms-sql-server.tools.ssms]] select your target database (the one you wanted to make changes to not your DEV copy of it) - Go to the SQL Source Control Interface and `Get Latest` - Apply Changes -->

- Open [[r.+.redgate-training.sql-change-automation]]
- Pick your source and target for changes
  - Based on this proposed workflow your source would be the "master" copy of the Database on the `TEST` server
  - Based on this proposed workflow your target would be the same Database on the `PROD` server
- Select desired changes to be implemented
- Click `Deploy` to generate a migration script to implement changes without modification to the repository code
- If the individual cannot run the code then pass it to someone who can as a final deliverable artifact to be executed and discarded

#### implementation

1. Make Empty Git repository on Azure DevOps
2. Make sure git repo has a master branch on the remote
   - If you cannot make this happen, it can be fixed later and might just have to be
3. Setup [[s.q.tsql.tools.redgate.implementation.changelog-database]]
   - Or at least make sure all users have followed steps to get added to changelog DB
4. if no remote master branch then errors will occur in the local repository
   - To correct errors of missing `origin/master` run `git branch --unset-upstream` since an upstream `master` branch doesn't yet exist
   - This will arise when you add a DB to the source control and try to enter the commit screen
5. Make a new folder for each Version Controlled DB in your Repo
   - Convention being used is to name each folder the same as the DB's name but in all CAPS
6. Setup Source control filters so you essentially have gitignore functionality in the repository
7. [[s.q.tsql.tools.redgate.implementation.link-a-development-database-to-your-source-control-system]]
   - If applicable [[s.q.tsql.tools.redgate.implementation.linking-static-data]]

##### Changing Databases

###### Process

To track WHO is changing things (basically the source control `Last Changed By` column) we needed a change log database. [Guide](https://documentation.red-gate.com/soc7/configuring/log-changes-to-shared-databases)

1. Run the following code to create the database

```sql
USE master
EXECUTE ('CREATE DATABASE ChangeLog')
ALTER DATABASE ChangeLog SET ANSI_NULL_DEFAULT OFF
ALTER DATABASE ChangeLog SET ANSI_NULLS OFF
ALTER DATABASE ChangeLog SET ANSI_PADDING OFF
ALTER DATABASE ChangeLog SET ANSI_WARNINGS OFF
ALTER DATABASE ChangeLog SET ARITHABORT OFF
ALTER DATABASE ChangeLog SET AUTO_CLOSE OFF
ALTER DATABASE ChangeLog SET AUTO_CREATE_STATISTICS ON
ALTER DATABASE ChangeLog SET AUTO_SHRINK OFF
ALTER DATABASE ChangeLog SET AUTO_UPDATE_STATISTICS ON
ALTER DATABASE ChangeLog SET READ_WRITE
ALTER DATABASE ChangeLog SET RECOVERY SIMPLE
ALTER DATABASE ChangeLog SET MULTI_USER
ALTER DATABASE ChangeLog SET PAGE_VERIFY CHECKSUM
ALTER DATABASE ChangeLog SET DB_CHAINING ON
EXECUTE ('USE ChangeLog IF NOT EXISTS (SELECT * FROM sys.sysusers WHERE name=''guest'') EXECUTE sp_grantdbaccess guest')
```

2. Close [[s.apps.ssms]]
3. Navigate to `%localappdata%\Red Gate\SQL Source Control 7`
4. Open `RedGate_SQLSourceControl_Engine_EngineOptions.xml` in a text editor
5. Below the `EngineOptions version` line, add:
   - `<TraceCacheDatabase>ChangeLog</TraceCacheDatabase>`
   - **The above is case sensitive**
6. Save and close the file.

###### Important Points

- Each developer **must** have `db_owner` permissions for the change log database.
- You can delete the change log database, **but history about changes will be permanently deleted**.
- SQL Source Control **will only use the change log database to save information about changes to linked databases**. It won't be used for any other purpose.
##### link a development database to your source control system

[Linking a development database to your source control system](https://www.red-gate.com/hub/university/courses/sql-source-control/sql-source-control/getting-started/linking-development-database-source-control-system)

- Can link your database to a repository solution
- Right click on database
  - link database with source control
  - link to my source control system
  - Select your source control **which for ADO? git?**
  - Browse for target folder
  - Click on it
  - Add commit Message
  - OK
- select either (See [[#shared-vs-dedicated-development-model]])
  - dedicated database
  - shared database
- When link is complete the DB icon will turn green
- You need to start by committing all your objects
  - Click on "Commit" in Redgate source control
  - add commit message
  - Commit
  - this will export all your objects as create scripts to your source control

##### Linking static data tables

1. Right click a database in [[DevLog/s/q/tsql/tools/sql server management studio|sql server management studio]]
2. `Other SQL Source Control tasks`
3. `Link or unlink static data`
4. `save and close`
5. Right click a database in [[DevLog/s/q/tsql/tools/sql server management studio|sql server management studio]]
6. `Commit changes to source control`
7. In source control pane you will now see a commit with the data link change type that shows the diff as a bunch of insert statements
8. Commit message and commit

- Result is that in your source control repository there is now a `Data/` directory with a SQL file in it consisting of the insert statements.
- **To source control the data on your tables they need a PK**

#### Products

##### data compare

###### Setup

1. Close [[s.apps.ssms]] and Visual Studio
2. Run Installers

##### deploy

###### Setup

1. Close [[s.apps.ssms]] and Visual Studio
2. Run Installers
3. Enter [[s.apps.ssms]] and enter license key

##### flyway desktop

###### Setup

1. Close [[s.apps.ssms]] and Visual Studio
2. Run Installers

##### sql change automation

###### Setup

1. Close [[s.apps.ssms]] and Visual Studio
2. Run Installers

##### sql compare

###### Setup

1. Close [[s.apps.ssms]] and Visual Studio
2. Run Installers

##### sql scripts manager

###### Setup

1. Close [[DevLog/s/q/tsql/tools/sql server management studio|sql server management studio]] and Visual Studio
2. Run Installers
3. Make sure you have an up to date copy of the SQL Server Repo installed for the shared scripts folder
4. `File > Application Options > ...`
5. change scripts folder location to the location of the teams shared scripts

##### sql search

###### Setup

1. Close [[s.apps.ssms]] and Visual Studio
2. Run Installers

## Syntax

### apply operator

`CROSS APPLY` = [[inner-join]]

```sql
SELECT *
FROM Department D
CROSS APPLY (
    SELECT *
    FROM Employee E
    WHERE E.DepartmentID = D.DepartmentID
) A

-- The result sets from these would have been identical
SELECT *
FROM Department D
INNER JOIN Employee E ON D.DepartmentID = E.DepartmentID
```

`OUTER APPLY` = [[left-join]]

```sql
SELECT *
FROM Department D
OUTER APPLY (
    SELECT *
    FROM Employee E
    WHERE E.DepartmentID = D.DepartmentID
) A

-- The result sets from these would have been identical
SELECT *
FROM Department D
LEFT OUTER JOIN Employee E ON D.DepartmentID = E.DepartmentID
```

Great uses for Apply

- [SO Examples for apply](https://stackoverflow.com/questions/9275132/real-life-example-when-to-use-outer-cross-apply-in-sql)
- [Examples of main use cases](https://riptutorial.com/sql/example/8323/cross-apply-and-outer-apply-basics)

### between predicate

The `BETWEEN` predicate is used to signify an **inclusive** range of values for a condition. You use `BETWEEN` in conjunction with the `AND` operator to mark the two boundary values your results should fall between.
```sql
SELECT Age
FROM Person
WHERE Age BETWEEN 55 AND 65
-- This includes both the ages 55 and 65 as it is inclusive
```

### case statements

With case statements in T-SQL you can in essence insert a bunch of logic and calculation into a multi-faceted case statement with the output being a single result field

```sql
SELECT P.Age
      ,CASE
           WHEN P.Age > 55 THEN 1
           ELSE 0
       END AS 'RetirementAge'
FROM Person AS P
```

This case statement examines the `Age` field and asks if each value is `> 55`; if it is, the result field is assigned a `1`. If `Age` is not `> 55` then the `ELSE` statement is executed and the result field receives the value of `0`. The ultimate result is a boolean flag column indicating whether each individual is at `RetirementAge` or not.

You can also nest several `WHEN ... THEN` statements for a multi-faceted case statement:

```sql
SELECT P.Age
      ,CASE
           WHEN P.Age > 65 THEN 'Senior Citizen'
           WHEN P.Age > 55 THEN 'Mature Adult'
           WHEN P.Age > 40 THEN 'Mid-Life Crisis'
           ELSE 'No Discount'
       END AS 'DiscountLevel'
FROM Person AS P
```

### comments

Single line comments are denoted with double dashes, while multi-line comments follow the popular convention of C-like languages like [[cpp]] or [[javascript]].

```sql
-- a Single line comment

/*
A Multi-line comment
good for in-file documentation
*/
```

### cursors

> Cursors are a really inefficient way to loop, so they should only be used in those cases where your process is so inefficient that a cursor is the only thing less efficient than your process. A good example of this is loading large amounts of data between tables.
```sql
USE demo

DECLARE @Year INT
DECLARE @Month INT

DECLARE BatchingCursor CURSOR FOR
    SELECT DISTINCT YEAR([SomeDateField]), MONTH([SomeDateField])
    FROM [Sometable];

OPEN BatchingCursor;
FETCH NEXT FROM BatchingCursor INTO @Year, @Month;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRANSACTION
        --All logic goes in here
        --Any select statements from [Sometable] need to be suffixed with:
        --WHERE Year([SomeDateField])=@Year AND Month([SomeDateField])=@Month
    COMMIT TRANSACTION
    FETCH NEXT FROM BatchingCursor INTO @Year, @Month;
END;
CLOSE BatchingCursor;
DEALLOCATE BatchingCursor;
GO
```

### Data Definition Language

#### Create

```sql
CREATE TABLE Database.schema.table_name (
    ID INT PRIMARY KEY IDENTITY(1,1),
    Name VARCHAR(50),
    Misc VARCHAR(50)
)
```

The primary key here is using the [Identity()](https://docs.microsoft.com/en-us/sql/t-sql/functions/identity-function-transact-sql?view=sql-server-ver15) function. By default this table will have a clustered index; we can override that explicitly with:

```sql
CREATE TABLE Database.schema.table_name (
    ID INT NOT NULL IDENTITY(1,1),
    Name VARCHAR(50),
    Misc VARCHAR(50),
    PRIMARY KEY NONCLUSTERED ( ID ASC )
)
```

[MSDN Docs](https://docs.microsoft.com/en-us/sql/relational-databases/indexes/create-clustered-indexes?view=sql-server-ver15) recommend a clustered index for most situations unless there is an explicit reason to do otherwise.

##### Script new table

If creating a new table with similar structure to an existing one, right click the existing table in SSMS `SCRIPT TABLE AS` --> `CREATE TO` --> `NEW QUERY EDITOR WINDOW`

#### Alter

`ALTER` allows you to add, delete, or modify columns in an existing table.

```sql
ALTER TABLE Table_Name
-- and one of the following
ADD Column_Name Datatype
DROP COLUMN Column_Name
ALTER COLUMN Column_Name Datatype [NULL | NOT NULL]
```

#### Drop

`DROP TABLE` lets us delete a table from a database, not just its data like [[truncate]] or [[DevLog/s/q/tsql/syntax/dml/delete|delete]].
You can add `IF EXISTS` starting from SQL Server 2016. You cannot drop a table that is referenced by a foreign key constraint.

```sql
DROP TABLE My_Table
-- OR
DROP TABLE IF EXISTS My_Table
```

This is similar to dropping temp tables, where an `IF OBJECT_ID(...) IS NOT NULL` check serves the same purpose as `IF EXISTS`.

### Data Manipulation Language

#### SELECT

#### INSERT

- Allows us to add rows to an existing table
- also used in the generation of temp tables
- [[insert-into]]

#### update

`UPDATE` allows us to update a set of data in a table that can be specified by the `WHERE` criteria.

```sql
UPDATE My_Table
SET <Column1> = [updated_value1]
   ,<Column2> = [updated_value2]
   ,[...]
WHERE <filter(s)> (Optional)
```

So in essence:

```sql
BEGIN TRAN
UPDATE database.schema.table
SET Misc = 'Fourth'
OUTPUT Inserted.*
WHERE ID = 4

SELECT * FROM database.schema.table
ROLLBACK
COMMIT
```

- Uses elements:
  - [[begin-transaction]]
  - [[rollback]]
  - [[commit]]
  - [OUTPUT](https://docs.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql?view=sql-server-ver15)

##### Example

###### Before

| ID | Name | Misc     |
| -- | ---- | -------- |
| 1  | AAA  | First    |
| 2  | BBB  | Second   |
| 3  | CCC  | Third    |
| 4  | DDD  | **NULL** |

###### After

| ID  | Name | Misc   |
| --- | ---- | ------ |
| 1   | AAA  | First  |
| 2   | BBB  | Second |
| 3   | CCC  | Third  |
| 4   | DDD  | Fourth |

#### Delete

`DELETE` allows us to delete data from a table based on the filters specified in the `WHERE` clause.

```sql
DELETE FROM My_Table
WHERE <Filters>
```

`DELETE` is fully logged and can be an expensive operation if deleting a lot of data

#### Truncate

`TRUNCATE` allows us to delete all rows from a table without removing the table from the database

```sql
TRUNCATE TABLE My_Table
```

`TRUNCATE` is minimally logged and will perform far faster than delete will.

#### distinct

```sql
SELECT DISTINCT <COLUMN> FROM <TABLE>
```

Selects only unique values from that table.
If there are multiple columns then it's every unique pairing of all values in those 2 columns that occurs, and the same goes for every additional column:

```sql
SELECT DISTINCT <COLUMN 1>, [COLUMN 2], [...] FROM <TABLE>
```

#### group by

```sql
SELECT Age, COUNT(Name)
FROM Person
GROUP BY Age
HAVING Age > 50
```

The `GROUP BY` statement consolidates the records based on the aggregate function used. The aggregate function could be something like `COUNT()` or `SUM()` etc., and the query says group by `Age`. So it will find each distinct `Age` and then group together a `COUNT()` or `SUM()` of all the records for each of those unique `Age`'s. The filtering occurs post-aggregation, when `HAVING` is applied, so that aggregates are returned only for `Age`'s that are _> 50_

#### insert into

```sql
INSERT INTO database.schema.table (<col1>, <col2>, <col3>)
VALUES (<val1>, <val2>, <val3>)
     , (<val1>, <val2>, <val3>)
     , (<val1>, <val2>, <val3>)

-- OR

INSERT INTO database.schema.table
SELECT <columns>
FROM <other_table>
```

`INSERT INTO` is also used with temp table creation:

#### like predicate

Used with [[wild-cards]], the `LIKE` predicate matches string values against a wildcard pattern. It is similar in spirit to [[Regular Expressions]], but uses SQL Server's simpler wildcard syntax (`%` for any string of characters, `_` for a single character).

```sql
SELECT Name
FROM Person
WHERE Name LIKE 'Jo%' -- wild card matching on strings
/*
Result would be any of these:
John
Jon
Johnny
Jonny
Etc.
*/
```

### functions

#### concat_ws

Instead of explicitly adding each separator:

```sql
SELECT CONCAT(first_name, ' ', last_name)
FROM person_table;
```

> `CONCAT_WS(<separator>, <data>)`

```sql
SELECT CONCAT_WS(' ', first_name, middle_name, last_name)
FROM person_table;
```

#### iif

[[( 2022-03-16 How to Use These Six Unusual Sql Functions]]

#### lag

[[( 2022-03-16 How to Use These Six Unusual Sql Functions]]

#### lead

[[( 2022-03-16 How to Use These Six Unusual Sql Functions]]

#### object_id

- <https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-objects-transact-sql?view=sql-server-ver15>

```sql
USE AdventureWorks2012;
GO
IF OBJECT_ID(N'dbo.AWBuildVersion', N'U') IS NOT NULL
    DROP TABLE dbo.AWBuildVersion;
GO
```

The additional object types can be found at the documentation link above.

#### raiserror

<https://docs.microsoft.com/en-us/sql/t-sql/language-elements/raiserror-transact-sql?view=sql-server-ver15>

```sql
-- Syntax for SQL Server and Azure SQL Database
RAISERROR ( { msg_id | msg_str | @local_variable }
    { ,severity ,state }
    [ ,argument [ ,...n ] ] )
    [ WITH option [ ,...n ] ]
```

```sql
RAISERROR ('A vendor''s credit rating is too low to accept new purchase orders.', 16, 1);
```

```sql
BEGIN TRY
    -- RAISERROR with severity 11-19 will cause execution to
    -- jump to the CATCH block.
    RAISERROR ('Error raised in TRY block.', -- Message text.
               16, -- Severity.
               1 -- State.
               );
END TRY
BEGIN CATCH
    DECLARE @ErrorMessage NVARCHAR(4000);
    DECLARE @ErrorSeverity INT;
    DECLARE @ErrorState INT;

    SELECT
        @ErrorMessage = ERROR_MESSAGE(),
        @ErrorSeverity = ERROR_SEVERITY(),
        @ErrorState = ERROR_STATE();

    -- Use RAISERROR inside the CATCH block to return error
    -- information about the original error that caused
    -- execution to jump to the CATCH block.
    RAISERROR (@ErrorMessage, -- Message text.
               @ErrorSeverity, -- Severity.
               @ErrorState -- State.
               );
END CATCH;
```

#### sign

[[( 2022-03-16 How to Use These Six Unusual Sql Functions]]

### group by

#### having

##### The 'HAVING' Clause

The `HAVING` clause is like the `WHERE` clause except you can use aggregate functions like `COUNT()` and `SUM()` with it.

```sql
SELECT Age, Name, COUNT(Name)
FROM Person
GROUP BY Age
HAVING COUNT(Name) > 50
```

#### with rollup

##### The 'RollUp' Sub-Clause

```sql
SELECT Age, Name, COUNT(Name)
FROM Person
GROUP BY Age WITH ROLLUP
HAVING Age > 50
```

Adds an additional record holding the aggregate of the initial aggregation's results. So: if you had a grouping of `COUNT()`s for each `Age` already totaled up, the `ROLLUP` looks at all of those totals and aggregates them into a grand total.

### Hints

#### NoLOCK

[[( 2022-03-03 Understanding the Sql Server Nolock Hint]]

A better way to get dirty reads on tables:

```sql
USE AdventureWorks2016

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT *
FROM [Sales].[SalesOrderHeader] soh
JOIN [Sales].[SalesOrderDetail] sod
    ON soh.SalesOrderID = sod.SalesOrderID

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
```

### html email query results

```sql
DECLARE @tableHTML NVARCHAR(MAX) =
    N'<h1>Kells Monthly Invoice</h1>' +
    N'<table border="1">' +
    N'<tr>' +
    N'<th>Date</th>' +
    N'<th>Source_Account</th>' +
    N'<th>Category</th>' +
    N'<th>Cashflow_In</th>' +
    N'<th>Comment</th>' +
    N'</tr>'

SET @tableHTML = @tableHTML + CAST((
    SELECT td = FORMAT(GETDATE(), 'yyyy-MM'), '',
           td = Source_Account, '',
           td = Category, '',
           td = CAST(Cashflow_In AS NVARCHAR), '',
           td = Comment, ''
    FROM <table>
    FOR XML PATH('tr'), TYPE
) AS NVARCHAR(MAX))

DECLARE @total NVARCHAR(10) = CAST((SELECT SUM(Cashflow_In) FROM <table>) AS NVARCHAR)

SET @tableHTML = @tableHTML +
    '<tr><td colspan="3" style="text-align:right;">TOTAL: </td><td>' + @total + '</td><td></td></tr>'

SET @tableHTML = @tableHTML + '</table>'
```

### if else statements

Conditional operations.
Useful with control [[variables]]:

```sql
DECLARE @retirement INT = 1 -- Using a binary flag

IF @retirement = 1
BEGIN
    SELECT *
    FROM My_Table
    WHERE Retirement = 1
END
ELSE
BEGIN
    SELECT *
    FROM My_Table
    WHERE Retirement = 0
END
```

### joins

```sql
FROM <TABLE A>
[TYPE] JOIN <Table B>
    ON <A Field> = <B Field>
```

#### cross join

A Cartesian product that matches every value from one table to every value in another table, producing the complete list of every possible pairing:

| Table | One |
| ----- | --- |
| 1 | A |
| 2 | B |

| Table | Two |
| ----- | --- |
| 1 | C |
| 2 | D |

| Result | Table |
| ------ | ----- |
| 1 | AC |
| 2 | AD |
| 3 | BC |
| 4 | BD |

#### full outer join

This join preserves both tables and retains all records; where there are matches, it links them up.

| Table | One |
| ----- | --- |
| 1 | A |
| 2 | B |
| 3 | C |
| 4 | D |

| Table | Two |
| ----- | ---- |
| 1 | C |
| 2 | D |
| 3 | NULL |
| 4 | NULL |
| 5 | E |
| 6 | F |

| Result | Table One | Table Two |
| ------ | --------- | --------- |
| 1 | A | C |
| 2 | B | D |
| 3 | C | NULL |
| 4 | D | NULL |
| 5 | NULL | E |
| 6 | NULL | F |

#### inner join

An inner join is the default join in T-SQL. The `INNER` keyword is optional, but it is a best practice to use it.

```sql
SELECT M.Age, F.Age -- 2 Columns called 'Age' are returned because of the alias prefix
FROM MENS_AGES AS M
[INNER] JOIN WOMENS_AGES AS F
    ON F.Marriage_ID = M.Marriage_ID
WHERE M.Age > 55
    AND F.Age > 55
```

The `INNER JOIN` returns records where the criteria being joined upon match in both tables.

If you have a table of men and a table of women and want to `INNER JOIN` to return the married couples across both tables, you'd likely `INNER JOIN` men to women on `MARRIAGE_ID`, and only the pairs with IDs matching each other will be returned. This means that single people in either table will not be returned at all.
_Sorry!_

#### left join

The `LEFT JOIN` is like the inner join except, instead of throwing out all the single people, if we `LEFT JOIN` Women to Men on `MARRIAGE_ID` then the only records we're throwing out are _all the single ladies_.

The `LEFT JOIN` keeps every record on the left and adds the records that match the condition onto the right side. Where there is no match, `NULL` is returned.

_Filters placed in the WHERE clause (rather than the ON clause) can override a LEFT JOIN and effectively turn it back into an inner join._

#### left join where null

The same as a left join, except left-side records are kept only where the right side is `NULL` (i.e. where there was no match). So, left joining Men to Women on `MARRIAGE_ID` with `WHERE Women.MARRIAGE_ID IS NULL` means you will only see single men returned. No married men, and no women at all.

#### right join

The same as a left join, just in reverse. Conceptually easier to understand by just re-writing these as left joins.

### merge

The process of using `MERGE` works like this:

1. Identify the table you will load data into.
2. Identify the table that you will use as the source of your data.
3. Identify how records in those two tables are connected.
4. Give instructions on what to do when records do not match.
5. Give instructions on what to do when records do match.
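The steps above map directly onto the clauses of a `MERGE` statement. A minimal sketch; the table and column names here are placeholders, not real objects:

```sql
MERGE Target_Table AS target                     -- 1. table to load data into
USING Source_Table AS source                     -- 2. source of the data
    ON target.Key_Column = source.Key_Column     -- 3. how records are connected
WHEN NOT MATCHED THEN                            -- 4. what to do when records do not match
    INSERT (Key_Column, Value_Column)
    VALUES (source.Key_Column, source.Value_Column)
WHEN MATCHED THEN                                -- 5. what to do when records do match
    UPDATE SET target.Value_Column = source.Value_Column;
```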
#### Example

```sql
USE demo

DROP TABLE IF EXISTS Person
DROP TABLE IF EXISTS PersonStageTable

CREATE TABLE Person(
    PersonID BIGINT NOT NULL,
    FirstName NVARCHAR(50) NULL,
    LastName NVARCHAR(50) NULL,
    SourceSystemKey NVARCHAR(50) NULL
)

CREATE TABLE PersonStageTable(
    PersonID BIGINT NOT NULL,
    FirstName NVARCHAR(50) NULL,
    LastName NVARCHAR(50) NULL,
    SourceSystemKey NVARCHAR(50) NULL
)

INSERT INTO Person(PersonID, FirstName, LastName, SourceSystemKey)
SELECT 1, 'Bob', 'Wakefield', 1

INSERT INTO PersonStageTable(PersonID, FirstName, LastName, SourceSystemKey)
SELECT 1, 'Bob', 'Johnson', 1
UNION
SELECT 2, 'Sally', 'Ride', 2

SELECT * FROM Person
SELECT * FROM PersonStageTable

--*****Merge example begins here.*****
MERGE Person AS target
USING (
    SELECT PersonID, FirstName, LastName, SourceSystemKey
    FROM PersonStageTable
) AS source
ON (target.SourceSystemKey = source.SourceSystemKey)
WHEN NOT MATCHED THEN
    INSERT (
        PersonID,
        FirstName,
        LastName,
        SourceSystemKey
    )
    VALUES (
        source.PersonID,
        source.FirstName,
        source.LastName,
        source.SourceSystemKey
    )
WHEN MATCHED THEN
    UPDATE SET
        target.PersonID = source.PersonID,
        target.FirstName = source.FirstName,
        target.LastName = source.LastName,
        target.SourceSystemKey = source.SourceSystemKey
;

SELECT * FROM Person

DROP TABLE Person
DROP TABLE PersonStageTable
```

Another example of the syntax:

```sql
MERGE target AS TARGET
USING source AS SOURCE
    ON condition
WHEN MATCHED THEN
    UPDATE
WHEN NOT MATCHED BY TARGET THEN
    INSERT
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```

### operators

| Operator | Purpose | Type |
| :------- | :------ | :--- |
| `+` | Addition | Arithmetic |
| `-` | Subtraction | Arithmetic |
| `*` | Multiplication | Arithmetic |
| `/` | Division | Arithmetic |
| `%` | Modulo division | Arithmetic |
| `ALL` | TRUE if all of a set of comparisons are TRUE. | Logical |
| `AND` | TRUE if both Boolean expressions are TRUE. | Logical |
| `ANY` | TRUE if any one of a set of comparisons are TRUE. | Logical |
| `BETWEEN` | TRUE if the operand is within a range. | Logical |
| `EXISTS` | TRUE if a subquery contains any rows. | Logical |
| `IN` | TRUE if the operand is equal to one of a list of expressions. | Logical |
| `LIKE` | TRUE if the operand matches a pattern. | Logical |
| `NOT` | Reverses the value of any other Boolean operator. | Logical |
| `OR` | TRUE if either Boolean expression is TRUE. | Logical |
| `SOME` | TRUE if some of a set of comparisons are TRUE. | Logical |
| `::` | Like [[cpp]] and [[R-Lang]] namespace access | Scope Resolution |
| `=` | Assign values to a variable | Assignment |
| `=` | (Equals) Equal to | Comparison |
| `>` | (Greater Than) Greater than | Comparison |
| `<` | (Less Than) Less than | Comparison |
| `>=` | (Greater Than or Equal To) Greater than or equal to | Comparison |
| `<=` | (Less Than or Equal To) Less than or equal to | Comparison |
| `<>` | (Not Equal To) Not equal to | Comparison |
| `!=` | (Not Equal To) Not equal to (not ISO standard) | Comparison |
| `!<` | (Not Less Than) Not less than (not ISO standard) | Comparison |
| `!>` | (Not Greater Than) Not greater than (not ISO standard) | Comparison |
| [[wild-cards]] | Wild card operators | String |
| `+` | String concatenation | String |
| `+=` | Adds some amount to the original value and sets the original value to the result. | Compound Operators |
| `-=` | Subtracts some amount from the original value and sets the original value to the result. | Compound Operators |
| `*=` | Multiplies by an amount and sets the original value to the result. | Compound Operators |
| `/=` | Divides by an amount and sets the original value to the result. | Compound Operators |
| `%=` | Divides by an amount and sets the original value to the modulo. | Compound Operators |
| `&=` | Performs a bitwise AND and sets the original value to the result. | Compound Operators |
| `^=` | Performs a bitwise exclusive OR and sets the original value to the result. | Compound Operators |
| `\|=` | Performs a bitwise OR and sets the original value to the result. | Compound Operators |

### Pivoting

The `PIVOT` is like the transpose operation in Excel: you take the top-left point, anchor it, and then flip the axes and their values.

```sql
SELECT [Name], pvt.Low, pvt.Med, pvt.High, pvt.Ultra
FROM Person
PIVOT (SUM(Total) FOR [Level] IN (Low, Med, High, Ultra)) AS pvt
```

#### UN-pivoting

```sql
SELECT [Name], upvt.[Level], upvt.Total
FROM Person
UNPIVOT (Total FOR [Level] IN (Low, Med, High, Ultra)) AS upvt
```

#### Advanced Pivoting

- <https://www.mssqltips.com/sqlservertip/7167/sql-pivot-grouping-sets-advanced-reporting/>
- <https://www.mssqltips.com/sqlservertip/7233/sql-pivot-sql-unpivot-examples-transform-data/>

#### Example

##### Before

| Name | Level | Total |
| ---- | ----- | ----- |
| Joe | Low | 34 |
| Joe | Med | 65 |
| Joe | High | 29 |
| Joe | Ultra | 99 |
| Tom | Low | 37 |
| Tom | Med | 53 |
| Tom | High | 24 |
| Tom | Ultra | 74 |

##### After

| Name | Low | Med | High | Ultra |
| ---- | --- | --- | ---- | ----- |
| Joe | 34 | 65 | 29 | 99 |
| Tom | 37 | 53 | 24 | 74 |

### set statements

[MSDN Documentation](https://docs.microsoft.com/en-us/sql/t-sql/statements/set-statements-transact-sql?view=sql-server-ver15)

### SQL Server SET NOCOUNT AND SET ROWCOUNT

#### SET ROWCOUNT

Although the name `SET ROWCOUNT` is very similar, it doesn't impact `@@ROWCOUNT` directly. `SET ROWCOUNT` simply tells SQL Server to stop processing a query after the specified number of rows have been returned, which makes it kind of a "global TOP clause". In the following example, we're limiting the rows to 500. The SELECT query itself should return 1,000 rows, but as you can see `@@ROWCOUNT` tells us only 500 were returned.
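The screenshot below presumably captures the output of a query along these lines. A sketch, assuming a `dbo.Customer` table with at least 1,000 rows:

```sql
-- Limit every subsequent query in this session to 500 rows.
SET ROWCOUNT 500;

SELECT TOP 1000 *
FROM dbo.Customer;

SELECT @@ROWCOUNT; -- reports 500, not 1,000

-- Reset so later queries are not silently truncated.
SET ROWCOUNT 0;
```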
![[2022-03-03-14-02-14.png]] #### SET NOCOUNT `SET NOCOUNT ON` also doesn’t affect `@@ROWCOUNT`. `SET NOCOUNT` tells SQL Server to stop displaying the message with the number of rows affected by a query. However, `@@ROWCOUNT` is still updated. Let’s illustrate with an example. First the default configuration where `NOCOUNT` is off. ![[2022-03-03-14-03-56.png]] #### quoted identifier <https://docs.microsoft.com/en-us/sql/t-sql/statements/set-quoted-identifier-transact-sql?view=sql-server-ver15> ##### Using the quoted identifier setting and reserved word object names ```sql SET QUOTED_IDENTIFIER OFF GO -- Create statement fails. CREATE TABLE "select" ("identity" INT IDENTITY NOT NULL, "order" INT NOT NULL); GO SET QUOTED_IDENTIFIER ON; GO -- Create statement succeeds. CREATE TABLE "select" ("identity" INT IDENTITY NOT NULL, "order" INT NOT NULL); GO SELECT "identity","order" FROM "select" ORDER BY "order"; GO DROP TABLE "SELECT"; GO SET QUOTED_IDENTIFIER OFF; GO ``` ##### Using the quoted identifier setting with single and double quotation marks ```sql SET QUOTED_IDENTIFIER OFF; GO USE AdventureWorks2012; IF EXISTS(SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'Test') DROP TABLE dbo.Test; GO USE AdventureWorks2012; CREATE TABLE dbo.Test (ID INT, String VARCHAR(30)) ; GO -- Literal strings can be in single or double quotation marks. INSERT INTO dbo.Test VALUES (1, "'Text in single quotes'"); INSERT INTO dbo.Test VALUES (2, '''Text in single quotes'''); INSERT INTO dbo.Test VALUES (3, 'Text with 2 '''' single quotes'); INSERT INTO dbo.Test VALUES (4, '"Text in double quotes"'); INSERT INTO dbo.Test VALUES (5, """Text in double quotes"""); INSERT INTO dbo.Test VALUES (6, "Text with 2 """" double quotes"); GO SET QUOTED_IDENTIFIER ON; GO -- Strings inside double quotation marks are now treated -- as object names, so they cannot be used for literals. 
INSERT INTO dbo."Test" VALUES (7, 'Text with a single '' quote');
GO

-- Object identifiers do not have to be in double quotation marks
-- if they are not reserved keywords.
SELECT ID, String
FROM dbo.Test;
GO

DROP TABLE dbo.Test;
GO

SET QUOTED_IDENTIFIER OFF;
GO
```

### stored procedures

A <u>stored procedure</u> is a group of SQL statements that have been created and stored in the database. A stored procedure can accept input parameters like `@dates` or `@Name`. Run a stored procedure with `EXEC Database.Schema.ProcedureName`.

```sql
-- ================================================
-- Template generated from Template Explorer using:
-- Create Procedure (New Menu).SQL
--
-- Use the Specify Values for Template Parameters
-- command (Ctrl-Shift-M) to fill in the parameter
-- values below.
--
-- This block of comments will not be included in
-- the definition of the procedure.
-- ================================================
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
CREATE PROCEDURE <Procedure_Name, sysname, ProcedureName>
    -- Add the parameters for the stored procedure here
    <@Param1, sysname, @p1> <Datatype_For_Param1, , int> = <Default_Value_For_Param1, , 0>,
    <@Param2, sysname, @p2> <Datatype_For_Param2, , int> = <Default_Value_For_Param2, , 0>
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
SET NOCOUNT ON; -- Insert statements for procedure here SELECT <@Param1, sysname, @p1>, <@Param2, sysname, @p2> END GO ``` --- - Reference: - <https://docs.microsoft.com/en-us/sql/relational-databases/stored-procedures/return-data-from-a-stored-procedure?view=sql-server-ver15> #### Returning Data ##### Return via parameter ```sql USE AdventureWorks2012; GO IF OBJECT_ID('Sales.uspGetEmployeeSalesYTD', 'P') IS NOT NULL DROP PROCEDURE Sales.uspGetEmployeeSalesYTD; GO CREATE PROCEDURE Sales.uspGetEmployeeSalesYTD @SalesPerson nvarchar(50), @SalesYTD money OUTPUT AS SET NOCOUNT ON; SELECT @SalesYTD = SalesYTD FROM Sales.SalesPerson AS sp JOIN HumanResources.vEmployee AS e ON e.BusinessEntityID = sp.BusinessEntityID WHERE LastName = @SalesPerson; RETURN GO ``` or ```sql -- Declare the variable to receive the output value of the procedure. DECLARE @SalesYTDBySalesPerson money; -- Execute the procedure specifying a last name for the input parameter -- and saving the output value in the variable @SalesYTDBySalesPerson EXECUTE Sales.uspGetEmployeeSalesYTD N'Blythe', @SalesYTD = @SalesYTDBySalesPerson OUTPUT; -- Display the value returned by the procedure. PRINT 'Year-to-date sales for this employee is ' + convert(varchar(10),@SalesYTDBySalesPerson); GO ``` ##### Return via result set ```sql USE AdventureWorks2012; GO IF OBJECT_ID('Sales.uspGetEmployeeSalesYTD', 'P') IS NOT NULL DROP PROCEDURE Sales.uspGetEmployeeSalesYTD; GO CREATE PROCEDURE Sales.uspGetEmployeeSalesYTD AS SET NOCOUNT ON; SELECT LastName, SalesYTD FROM Sales.SalesPerson AS sp JOIN HumanResources.vEmployee AS e ON e.BusinessEntityID = sp.BusinessEntityID RETURN GO ``` ### sub queries A sub-query is a nested `SELECT` statement inside of another parent `SELECT` statement. 
A usage of this is to select a record set which you then want to manipulate and operate on after that initial pool is generated:

```sql
SELECT P.Age
FROM (
    SELECT P1.*
    FROM PERSON P1
    WHERE P1.Name = 'Paul'
) AS P
```

In this example the first operation performed is the gathering of all records from the `PERSON` table (aliased `P1`) where the `Name` field contains the string value _Paul_. Then we select the `Age` field from that initial result set.

With sub-queries:

- A required `SELECT` statement list
- A required `FROM` clause
- An optional `WHERE` clause
- An optional `GROUP BY` clause
- An optional `HAVING` clause
- But **NO** `ORDER BY` clause; this will throw an _error_

Sub-queries can be used anywhere an expression is allowed. This includes the `SELECT` portion of a query, the `FROM` clause, the `WHERE` clause, `JOIN`s, and `GROUP BY`s.

### table aliasing

Aliasing a table reduces the amount of typing needed for a query. This allows something like `A_REALLY_LONG_TABLE_NAME` to be reduced to `T` by using an alias:

```sql
SELECT T.Age
FROM A_REALLY_LONG_TABLE_NAME AS T
WHERE T.Age > 55
```

Aliasing also allows you to include fields with the same name from different tables:

```sql
SELECT M.Age, F.Age -- 2 Columns called 'Age' are returned because of the alias prefix
FROM MENS_AGES AS M
JOIN WOMENS_AGES AS F
    ON F.Marriage_ID = M.Marriage_ID
WHERE M.Age > 55
    AND F.Age > 55
```

### table variables

```sql
-- Table variable
DECLARE @tablevariable TABLE
    (customerid [int] NOT NULL PRIMARY KEY,
     lastorderdate [datetime] NULL);

INSERT INTO @tablevariable
SELECT customerid, MAX(orderdate) AS lastorderdate
FROM sales.SalesOrderHeader
GROUP BY customerid;

SELECT *
FROM sales.salesorderheader AS soh
INNER JOIN @tablevariable AS t
    ON soh.customerid = t.customerid
    AND soh.orderdate = t.lastorderdate
GO
```

### Transaction Control Language

#### begin transaction

**Best Practice**

`BEGIN TRAN` marks the beginning of a local transaction; nothing is permanently recorded until committed.
This allows us to roll back any unwanted transactions. Best practice is to start with `BEGIN TRAN`, especially when using the following DML commands: `INSERT`, `DELETE`, and `UPDATE`.

```sql
BEGIN TRAN

INSERT INTO Database.Schema.Table (<field>)
VALUES('<INSERTED VALUE>')

SELECT * FROM Database.Schema.Table

ROLLBACK
COMMIT
```

#### commit

`COMMIT` commits the changes made since `begin transaction`, whereas `rollback` reverts the changes back to the original state before modification in the `begin transaction`.

#### rollback

`ROLLBACK` is used with `begin transaction` to revert changes. If you wanted to keep the changes made you would `commit` them.

### the go statement

The `GO` statement signals the end of a batch of Transact-SQL statements to the SQL Server utilities.

```sql
USE My_Database

SELECT * FROM My_Table
GO
```

### the in predicate

Similar to the `between` predicate, the `IN` predicate denotes not a range between value A and value B but any value in a list that doesn't lend itself to ranging, like specific text values or disjointed numerical values:

```sql
SELECT Age
FROM Person
WHERE Age IN (55, 65, 80) -- This matches only to those 3 listed numbers

-- OR

SELECT Name
FROM Person
WHERE Name IN ('Bob', 'Joe', 'Mary', 'Sue') -- matching on strings
```

### the use statement

The `USE` statement indicates which database to use in something like a `stored procedure`:

```sql
USE My_Database

<Query Code...>
GO
```

### Top

```sql
SELECT TOP 100 <COLUMN>
FROM <TABLE>
```

Selects the first `100` records from `<TABLE>` that meet all the criteria, starting from the top of the table scan; they are the first valid records encountered.

### union

The `UNION` operator is useful for combining the records from one table with another.
There must be:

- The same number of columns in both tables
- Columns must align with each other and be the same data type

Note that `UNION` removes duplicate rows; `UNION ALL` keeps them.

The operator basically does this:

```sql
SELECT ID, Value FROM My_First_Table
UNION
SELECT ID, Value FROM My_Second_Table
```

#### Example

##### Before

| ID | Val |
| -- | --- |
| 1 | A |
| 2 | B |

| ID | Val |
| -- | --- |
| 3 | C |
| 4 | D |

##### After

| ID | Val |
| -- | --- |
| 1 | A |
| 2 | B |
| 3 | C |
| 4 | D |

### variables

#### local

To declare a variable in T-SQL for reuse throughout a query, the syntax is simple:

```sql
DECLARE @VariableName AS datatype
SET @VariableName = value

-- OR

DECLARE @VariableName AS datatype = value
```

You can also use a single `DECLARE` to create multiple variables in one go:

```sql
DECLARE @VariableName AS datatype
    ,@VariableName2 AS datatype
```

#### global

##### Usage

`SELECT @@ROWCOUNT` in the same execution block to get the result back:

```sql
SELECT TOP 1000 *
FROM dbo.Customer;

SELECT @@ROWCOUNT;
```

This returns the query results and the count of how many rows were in that result set.

```sql
SELECT @@ROWCOUNT;
```

executed by itself only returns a count of 1 record (itself).

##### Error Handling and Business Rules

> Using SQL Server `@@ROWCOUNT` for Error Handling and Checking a Business Rule

```sql
BEGIN TRAN

UPDATE [Sales].[SalesOrderHeader]
SET [SubTotal] = [SubTotal] * 1.1; -- 10% increase

IF @@ROWCOUNT = 0
    PRINT 'Something went wrong!'
ELSE
    PRINT 'Rows were updated...'

--COMMIT
ROLLBACK
```

##### Instances of Large ROWCOUNT

SQL Server `ROWCOUNT_BIG` function:

The data type of `@@ROWCOUNT` is integer. In cases where more rows are affected than an integer can handle (meaning more than 2,147,483,647 rows!), you need to use the `ROWCOUNT_BIG` function. This function returns the data type `bigint`.
```sql
SELECT TOP 1000 *
FROM dbo.Customer;

SELECT ROWCOUNT_BIG();
```

##### Utilizing ROWCOUNT with Try Catch Statements

```sql
BEGIN TRY
    SELECT TOP 100 *
    FROM [AdventureWorks2017].[Person].[Person];
END TRY
BEGIN CATCH
    SELECT TOP 50 *
    FROM [AdventureWorks2017].[Person].[Person];
END CATCH

SELECT @@ROWCOUNT;

/*
@@ROWCOUNT returns zero! This is because the last statement is not the
SELECT statement from the TRY block (which has been executed), and it's
also not the one from the CATCH block, even though that is the last
SELECT in the script. It's the TRY/CATCH block itself!

@@ROWCOUNT returns the affected rows from any statement, even if it's
not DML or a SELECT query. To avoid this kind of scenario, you can
store the row count in a local variable. The script would then look
like this:
*/

DECLARE @rowcount INT;

BEGIN TRY
    SELECT TOP 100 *
    FROM [AdventureWorks2017].[Person].[Person];
    SET @rowcount = @@ROWCOUNT;
END TRY
BEGIN CATCH
    SELECT TOP 50 *
    FROM [AdventureWorks2017].[Person].[Person];
    SET @rowcount = @@ROWCOUNT;
END CATCH

SELECT @rowcount;
```

### wild cards

> A wildcard character is used to substitute one or more characters in a string
>
> <div class="signature"><a src="https://www.w3schools.com/sql/sql_wildcards.asp">w3schools</a></div>

---

| Symbol | Description | Example |
| :----- | :-------------------------------------------------- | :--------------------------------------- |
| `*` | Represents zero or more characters (MS Access) | `bl*` finds bl, black, blue, and blob |
| `?` | Represents a single character (MS Access) | `h?t` finds hot, hat, and hit |
| `[]` | Represents any single character within the brackets | `h[oa]t` finds hot and hat, but not hit |
| `^` | Represents any character not in the brackets | `h[^oa]t` finds hit, but not hot and hat |
| `-` | Represents a range of characters | `c[a-b]t` finds cat and cbt |
| `%` | Represents zero or more characters | `bl%` finds bl, black, blue, and blob |
| `_` | Represents a single character | `h_t` finds hot, hat, and hit |

---

Has to be used
in conjunction with the `LIKE` keyword, as this is the trigger to run a search using these [[Regular Expressions|regular-expression]] "wild card" searches.

---

- Reference:
  - [w3Schools](https://www.w3schools.com/sql/sql_wildcards.asp)

### window functions

#### first value

From

```sql
WITH orders AS (
    SELECT name, model, year, date_at_lot,
           ROW_NUMBER() OVER(PARTITION BY model, year ORDER BY date_at_lot ASC) AS order_num
    FROM cars
)
SELECT name AS oldest_car_name, model, year
FROM orders
WHERE order_num = 1
```

To

```sql
SELECT FIRST_VALUE(name) OVER(PARTITION BY model, year ORDER BY date_at_lot ASC) AS oldest_car_name,
       model, year
FROM cars
```

#### last value

Same usage as `FIRST_VALUE`.

#### rank over partition

```sql
RANK ( ) OVER ( [ partition_by_clause ] order_by_clause )
```

From <https://docs.microsoft.com/en-us/sql/t-sql/functions/rank-transact-sql?view=sql-server-ver15>

**Example:**

```sql
RANK() OVER (PARTITION BY PersonID ORDER BY createdDate DESC) AS 'Rank'
```

Partition the records based on `PersonID`; this means the records are grouped by `PersonID` and each group is then operated on as a unit. Ordering by `createdDate` descending means that within each `PersonID` group, records are ranked starting at 1 in descending order of `createdDate`.
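A fuller sketch of how the expression above might sit in a query; the `Person_History` table and its columns are hypothetical:

```sql
-- Hypothetical table: Person_History(PersonID, createdDate, Detail)
-- Rank each person's history rows, newest first; rank 1 is the latest row.
SELECT PersonID,
       createdDate,
       RANK() OVER (PARTITION BY PersonID ORDER BY createdDate DESC) AS [Rank]
FROM Person_History;

-- Wrap it in a derived table to keep only the most recent row per person:
SELECT PersonID, createdDate
FROM (
    SELECT PersonID,
           createdDate,
           RANK() OVER (PARTITION BY PersonID ORDER BY createdDate DESC) AS [Rank]
    FROM Person_History
) AS ranked
WHERE [Rank] = 1;
```

Note that `RANK()` can return ties (two rows with the same `createdDate` both get rank 1); `ROW_NUMBER()` would break the tie arbitrarily.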