Sunday, September 22, 2019

Update on a view with NULL column

Let's set up an experiment:

CREATE TABLE T 
(
	c1 INT,
	c2 DATETIME2
);

GO

INSERT INTO T
VALUES (1, GETUTCDATE());

GO

CREATE VIEW V AS
SELECT c1, NULL as c2 FROM T;

GO

CREATE TRIGGER VT on V 
INSTEAD OF UPDATE AS

	UPDATE V SET c1 = 1
	FROM V JOIN inserted AS i ON V.c1 = i.c1

GO

Now assume you want to run an update statement through the view, and for some reason this statement updates column c2 (yes, I agree this isn't a very sensible thing to do).

UPDATE V SET c2 = GETUTCDATE()
WHERE c1 = 1

This fails with error:
Implicit conversion from data type datetime to int is not allowed. Use the CONVERT function to run this query.

How to work around this? Well, the solution is suggested by the error message: a bare NULL is interpreted as an integer by default. You need to cast the NULL in the view to the proper data type:

ALTER VIEW V AS
SELECT c1, CAST(NULL AS DATETIME2) AS c2
FROM T

Now the above update will be successful.
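If you want to verify what data type SQL Server infers for a view column, you can ask the engine directly with sys.dm_exec_describe_first_result_set (this is just a sanity check; the view V is the one from the example above):

SELECT name, system_type_name
FROM sys.dm_exec_describe_first_result_set(N'SELECT c1, c2 FROM V', NULL, 0);

Before the ALTER VIEW the c2 column should be reported as int; after the cast it should be reported as datetime2.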

Normally you shouldn't have constant columns in a view that your code tries to update or insert into. Yet I found myself in such a situation and I'm sure it can happen to others. That's why I wanted to document this peculiarity.

Sunday, August 25, 2019

Who is doing all those batch requests anyway?

Let's say you have an instance which normally handles 1000 batch requests/s. Then suddenly there are 10 times more batch requests. It doesn't even need to affect performance in a visible way, yet it's a sign that something is going wrong somewhere. In case you have many applications, each connecting through a different user, you can try to pinpoint which one it is. To do this you can set up such an extended event session:

CREATE EVENT SESSION NumberOfStatementsByUser ON SERVER 
ADD EVENT sqlserver.sql_batch_completed(
    ACTION(sqlserver.username))
, ADD EVENT sqlserver.rpc_completed(
    ACTION(sqlserver.username))
ADD TARGET package0.histogram(SET source=N'sqlserver.username')
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS, 
        MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,
        MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF)
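The session is created in a stopped state (STARTUP_STATE=OFF), so you need to start it yourself, and stop it again when you're done, because the histogram target keeps accumulating counts for as long as the session runs:

ALTER EVENT SESSION NumberOfStatementsByUser ON SERVER STATE = START;
-- ...let it collect for a minute or so, then:
ALTER EVENT SESSION NumberOfStatementsByUser ON SERVER STATE = STOP;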

After starting it, you can check which user issues the most requests with the following query:

;WITH
	sql_batch_completed_by_database AS ( 
        SELECT CAST(xet.target_data AS xml) AS event_data
        FROM sys.dm_xe_session_targets AS xet  
        JOIN sys.dm_xe_sessions AS xe  
            ON (xe.address = xet.event_session_address)  
        WHERE xe.name = 'NumberOfStatementsByUser'
    )
    , requests_by_user AS (
        SELECT
              slot.value('(./value)[1]', 'sysname') AS Username
            , slot.value('@count', 'bigint') AS RequestCount
        FROM (SELECT event_data FROM sql_batch_completed_by_database) AS xe(event_data)
        CROSS APPLY event_data.nodes('/HistogramTarget/Slot') AS histogram_target(slot)
    )
SELECT Username, SUM(RequestCount) AS RequestCount
FROM requests_by_user
GROUP BY Username;

Personally, I start this extended event session only when I see a problem and only for a minute or so. This is enough to identify the troublesome user. After I identify the user, I usually set up tracing only for him to see what queries he runs. I stop the trace as soon as possible. The profiler is quirky and usually runs for a few seconds before it can be stopped, which is enough to capture thousands of events. In my experience such things happen when the code being executed goes into an infinite loop or the user is missing permissions (yes, this happened). Having the queries which are being executed should be sufficient information to give to the people responsible for the application, so they can start troubleshooting their code.
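If you prefer to avoid Profiler, the per-user trace can also be done with a short-lived Extended Event session filtered on the username. This is only a sketch; TroublesomeUser is a placeholder for the login you identified:

CREATE EVENT SESSION TraceSingleUser ON SERVER
ADD EVENT sqlserver.sql_batch_completed(
    ACTION(sqlserver.sql_text)
    WHERE (sqlserver.username = N'TroublesomeUser'))
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO

ALTER EVENT SESSION TraceSingleUser ON SERVER STATE = START;
-- capture for a few seconds, then:
ALTER EVENT SESSION TraceSingleUser ON SERVER STATE = STOP;
DROP EVENT SESSION TraceSingleUser ON SERVER;

The ring_buffer target keeps the captured events in memory, which is convenient for something this short-lived.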

Tuesday, July 9, 2019

Finding failing queries in SQL Server

Sometimes you know there is a query which fails with an error, but you don't know what it looks like. For example, the application doesn't log it, or the query is a dynamic one. To track these you can use such an Extended Event session:

CREATE EVENT SESSION FailingQueries
ON SERVER
    ADD EVENT sqlserver.error_reported
    (ACTION (sqlserver.sql_text,
             sqlserver.database_id,
             sqlserver.username)
     WHERE (   [severity] > 10
               AND error_number <> 942 /*Database '%.*ls' cannot be opened because it is offline.*/
               AND error_number <> 916 /*The server principal "%.*ls" is not able to access the database "%.*ls" under the current security context.*/
               AND error_number <> 18456 /*Login failed for user '%.*ls'.%.*ls%.*ls*/
               AND error_number <> 17830 /*Network error code 0x2746 occurred while establishing a connection; the connection has been closed.*/
               AND error_number <> 4060 /*Cannot open database  requested by the login. The login failed.*/
     ))
    ADD TARGET package0.asynchronous_file_target
    (SET filename = 'G:\FailingQueries.xel', max_file_size = 5, max_rollover_files = 2)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO

(Don't forget to set your own path)
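The session is created stopped (STARTUP_STATE defaults to OFF), so start it with:

ALTER EVENT SESSION FailingQueries ON SERVER STATE = START;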

This session captures errors with severity higher than 10 (anything below is informational) and filters out those I found useless and quite chatty in practice. For example, if a developer connects with SSMS to an instance on which he doesn't have access to all databases, SSMS will try to open those databases anyway.

You can check the captured data with the following SQL:

DECLARE @filename NVARCHAR(max)
SELECT @filename = REPLACE(CAST(sesf.value AS NVARCHAR(MAX)), '.xel', '') 
FROM sys.server_event_session_fields sesf
JOIN sys.server_event_sessions ses ON ses.event_session_id = sesf.event_session_id
WHERE sesf.name = 'filename'
AND ses.name = 'FailingQueries'

;WITH cte
AS (SELECT CONVERT(XML, event_data) ed
    FROM sys.fn_xe_file_target_read_file(@filename + '*.xel', NULL, NULL, NULL) ),
shredded_data AS (
SELECT DATEADD(
           mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), ed.value('(event/@timestamp)[1]', 'datetime2(3)')) AS [timestamp],
       DB_NAME(ed.value('(event/action[@name="database_id"]/value)[1]', 'nvarchar(128)')) AS database_name,
       ed.value('(event/data[@name="severity"]/value)[1]', 'tinyint') AS [err_severity],
       ed.value('(event/data[@name="error_number"]/value)[1]', 'int') AS [err_number],
       ed.value('(event/data[@name="message"]/value)[1]', 'nvarchar(512)') AS [err_message],
       ed.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS [sql_text],
       ed.value('(event/action[@name="username"]/value)[1]', 'nvarchar(128)') AS username
FROM cte)
SELECT * FROM shredded_data sd
--WHERE sd.database_name = 'YourDatabase'
ORDER BY timestamp;
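To check that the session actually captures something, you can provoke a harmless error yourself and then rerun the query above. A division by zero raises error 8134 with severity 16, which passes all the filters in the session definition:

SELECT 1/0;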

I like to have this Extended Event session enabled the whole time on development servers. That way, if developers come to me saying they have some errors but don't know what SQL is executed, I can just check the event data. I've also used a similar one in production.