Synthetic tables/keys in QlikView – and how to avoid them

I know this is a very basic topic in QlikView, and anyone who works with QlikView will already have come across synthetic keys. But since this is my first article on QV, and synthetic keys are something I ran into when I started (frankly, it took me quite some time to understand them completely), I thought I would explain them here again in very simple words.

By definition: ‘Synthetic keys occur when two or more tables have two or more fields in common.’

What does this mean? If two or more tables share more than one field (matched by name; remember that QlikView is case sensitive), an association between them is formed automatically through a composite key containing all of those common fields. QlikView handles this by creating an additional table, called a synthetic table, holding the distinct combinations of those common fields along with a generated primary key called a synthetic key (shown as $Syn); the data tables are then associated with each other through that synthetic key.
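For illustration, a minimal load script like the one below (the field values are made up) reproduces the situation: both tables share Name and RowStatus, so QlikView builds a synthetic table on that pair.

Product:
LOAD * INLINE [
ProductID, Name, RowStatus
1, Laptop, Active
2, Mouse, Inactive
];

Client:
LOAD * INLINE [
ClientID, Name, RowStatus
101, Laptop, Active
102, Mouse, Active
];

// Reloading this script produces a $Syn table on (Name, RowStatus)
// and a $Syn key field in both data tables.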

[Image: SyntheticKey_QV_1]

So, in the above example, Name and RowStatus are the common fields, and the synthetic table and key are therefore built on them, i.e. a unique combination is identified by these two fields. Now let's look at the data in a table box object showing all four fields.

[Image: SyntheticKey_QV_2]

At first glance this seems perfectly fine: the two tables are mapped on their common fields via the composite key, no records are duplicated, and the association serves its purpose. But that is not the whole story: synthetic keys are not considered good practice, and a data model that relies on them is generally regarded as badly designed.

There are differing opinions on whether generating synthetic keys is always a bad choice or whether we should simply focus on correcting the data model. So, at this point we know why QV creates the synthetic table/key automatically, but the question "Is this actually an error, and is it always bad to have?" is still open.

Synthetic keys are OK to have with smaller datasets, but if the dataset is significantly large and/or many common fields participate in the synthetic key, there will definitely be performance implications and possibly memory issues as well. Hence, wherever possible, we should avoid them.

Having said that, it is important to look at the particular scenario/requirement and decide whether we really need to get rid of the synthetic key. In the example above, the fields that appear to be common are not actually the same fields and do not serve the same purpose, so they should not be allowed to form a synthetic key.

There are various ways to avoid the synthetic table/key, listed below; which one to use depends on the requirement.

1. Removing the common fields: if the common fields causing the synthetic key are not actually required, we should remove them or comment them out.

2. Renaming / aliasing the fields: if the fields common to the two tables are actually different fields serving different purposes, they should be renamed per table (by giving them different aliases with the 'As' clause).

E.g. In the above scenario -
Product : ProductID, Name As ProductName, RowStatus
Client:      ClientID, Name As ClientName, RowStatus
(RowStatus can also be renamed further to ProductRowStatus & ClientRowStatus respectively)
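As a quick sketch, the full load statements could look like this (INLINE sources stand in for whatever the real sources are):

Product:
LOAD ProductID,
     Name As ProductName,   // RowStatus could be renamed the same way if it is not a real link
     RowStatus
INLINE [
ProductID, Name, RowStatus
1, Laptop, Active
];

Client:
LOAD ClientID,
     Name As ClientName,
     RowStatus
INLINE [
ClientID, Name, RowStatus
101, Acme, Active
];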

[Image: SyntheticKey_QV_3]

With only the Name field renamed, the two tables are still automatically associated on the remaining common field, RowStatus; this is the associative nature of QlikView.

3. Using QUALIFY: another approach, similar to renaming the fields, is the QUALIFY statement. It forces all the selected fields to get fully qualified names in the format TableName.FieldName.

This is achieved with QUALIFY *; or QUALIFY <field names>; (if only some of the fields should be qualified). If a few fields need to be excluded from qualification, for example genuine key fields, use the UNQUALIFY statement.
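A small sketch of how that looks in the script (table and field names follow the earlier example; which fields to UNQUALIFY depends on your model):

QUALIFY *;                      // every field loaded from here on becomes TableName.FieldName
UNQUALIFY ProductID, ClientID;  // keep the genuine key fields unqualified so they can still associate with other tables

Product:
LOAD * INLINE [
ProductID, Name, RowStatus
1, Laptop, Active
];

Client:
LOAD * INLINE [
ClientID, Name, RowStatus
101, Acme, Active
];

// Name and RowStatus are now stored as Product.Name, Product.RowStatus,
// Client.Name and Client.RowStatus, so no synthetic key can form.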

4. Joining the tables: we can also make use of JOIN and explicitly join the two tables.

[Image: SyntheticKey_QV_4]

(The difference between aliasing and joining is that with a join, a single table is created in the data model.)
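A minimal sketch of an explicit join (whether a join is the semantically right thing to do depends entirely on the data; values are illustrative):

Product:
LOAD * INLINE [
ProductID, Name, RowStatus
1, Laptop, Active
];

JOIN (Product)                 // joins on the common fields Name and RowStatus
LOAD * INLINE [
ClientID, Name, RowStatus
101, Laptop, Active
];

// Result: one table named Product containing ProductID, ClientID, Name and RowStatus.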

5. Concatenating the two tables: two tables are concatenated in QlikView when both have the same number of fields with the same names; the order of the fields may differ.

[Image: SyntheticKey_QV_5]
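A small sketch of concatenation (values are illustrative); the automatic behaviour can also be forced with the CONCATENATE prefix:

Orders2014:
LOAD * INLINE [
OrderID, Amount
1, 100
];

CONCATENATE (Orders2014)       // same field names and count, so rows are appended to one table
LOAD * INLINE [
OrderID, Amount
2, 250
];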

 More about concatenation in QV –   http://www.learnallbi.com/concatenate-and-noconcatenate-in-qlikview-part-1/

6. Using a complex/composite key: as we have seen in the examples above, the synthetic table contains a synthetic key that is a composite of all the combinations of the multiple key fields connecting the tables, and that key (again shown as $Syn) is also placed in the original data tables. Instead of letting QlikView build it, we can build our own composite key from the fields that would otherwise cause the synthetic key, e.g. ProductID & '_' & Name As ProductKey, and share only that single combined field between the tables instead of the two separate fields.
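A hedged sketch of the idea, using two hypothetical tables that genuinely share two key fields (OrderID and RowStatus); only the combined key is kept in the second table, so nothing else is shared:

Orders:
LOAD OrderID & '_' & RowStatus As OrderKey,   // one combined key instead of two shared fields
     OrderID,
     RowStatus,
     Amount
INLINE [
OrderID, RowStatus, Amount
1, Active, 100
];

OrderStatus:
LOAD OrderID & '_' & RowStatus As OrderKey,   // same composite key; the individual fields are dropped here
     StatusDate
INLINE [
OrderID, RowStatus, StatusDate
1, Active, 2014-01-01
];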

7. Creating a link table: a key/link table is frequently required in QlikView to resolve synthetic key or circular join issues. Creating and using a link table is a broader topic, so we will cover it in detail in the next article.


How to prevent an SSIS package from creating an empty flat file at the destination

It's very common to develop an SSIS package that generates its output as a flat file (.csv, .dat, .txt, etc.) using the Flat File Destination component. The content can be sourced from a SQL Server database, another flat file, an Excel source and so on.

Designing such an SSIS package is simple and straightforward. But when we run it, we find that it generates the output (flat) file in the configured destination folder even if no record is fetched from the source.

Although this is the expected behaviour, we hardly ever want an empty file (containing only the headers) as the output. Hence, we would like to add a condition somewhere to prevent the package from creating such an empty file.

When we think about how to achieve this, we come up with options like the following:

1. Deleting the empty output file at the end of the package if its size is 0 KB, or below 1 KB or so, depending on the size of the header row (if one is configured).
2. Capturing the count of records being inserted into the file with a Row Count transformation, and then using a condition in a Conditional Split transformation so that output is produced only if RecordCount > 0.
3. Calculating the record count before the Data Flow Task, i.e. before actually inserting the records into the flat file, by means of a Script Task or an Execute SQL Task, and executing the DFT only if RecordCount > 0.

…and there may be a few more workarounds.

Now let's see which of these approaches suits us best.

Approach #1 is not a good one, since we first allow the file to be generated and only then delete it based on its size. By that time the file's existence may already have triggered some other process or job that depends on it.

Approach #2 sounds very promising, but when we try it, it doesn't work :( . To prove this, below is what I created.

In this package the source has an always-false SELECT statement, i.e. no record is actually selected. A Row Count transformation captures the number of rows flowing towards the destination in a variable User::RecordCount, and a Conditional Split then applies the condition RecordCount > 0.

[Image: SSIS_OutputFile_1]

As suspected, the package still created an empty file (with headers) in the destination folder. So this is not a working solution either.

Approach #3: finally, we need to put the condition on the Data Flow Task (DFT) itself, in the control flow. In the simple scenario the source is a database such as SQL Server, and we can pre-calculate the count of records to be fetched (into a variable RecordCount) by running that particular query (a SELECT, a stored procedure, etc.) in an Execute SQL Task. In that case the structure can look like this:

[Image: SSIS_OutputFile_2]

This executes the DFT only if there are records to be inserted into the output file, so no empty flat file is created.
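As a sketch, the Execute SQL Task could run a count query like the one below (table, column and variable names are assumptions), with the single-row result mapped to User::RecordCount and the precedence constraint to the DFT evaluated as an expression:

-- Execute SQL Task: ResultSet = Single row, result column mapped to User::RecordCount.
-- The query should mirror the filter used by the Data Flow source.
SELECT COUNT(*) AS RecordCount
FROM   dbo.SourceOrders
WHERE  LoadDate >= ?;          -- '?' = OLE DB parameter, if the source query takes one

-- Precedence constraint from the Execute SQL Task to the DFT:
--   Evaluation operation = Expression
--   Expression          = @[User::RecordCount] > 0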

There can also be a slightly more complex scenario where the source query cannot easily be reused to pre-calculate the record count, because the records being written to the output file come out of various data-flow logic (a different source type such as Excel, plus a set of transformations). In that case the solution above might not work.

In that case we need to capture the record count inside the Data Flow Task itself, once all the logic/transformations have been applied.

This can be achieved in two ways:

1. If the record set is small, we can use a Recordset Destination in the data flow task, capture the record count in a variable, and then in the control flow check whether RecordCount > 0; if so, another DFT inserts the records from that Recordset object into the output file.
2. If the record set is quite large, the main DFT can write its output to a working file (a temp file with exactly the same layout as the main output file) while capturing the record count in a variable. In the control flow we then check the RecordCount variable and, if RecordCount > 0, another DFT generates the actual output file from that working file. At the end of the package the working file is deleted.

Autonomous Error/Event Logging into SQL Server

Error logging, or custom logging, is a very common requirement when working with any database and its operations, irrespective of the database server type, i.e. whether it is an Oracle or a SQL Server database. When we talk about SQL Server, it gives us good facilities for dealing with errors/exceptions by means of TRY...CATCH blocks in stored procedures and by performing the operations inside transactions.

So the pseudocode for the basic exception-handling structure of a stored procedure looks like this:

[Image: ErrorLogging_1]
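A minimal T-SQL sketch of that structure (procedure, table and column names are illustrative, not taken from the original image):

CREATE PROCEDURE dbo.usp_DoSomeWork
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        -- ... the actual DML work goes here ...

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;

        -- custom logging, e.g. an INSERT into dbo.T_Error with ERROR_NUMBER(), ERROR_MESSAGE(), ...

        DECLARE @ErrMsg NVARCHAR(2048) = ERROR_MESSAGE();
        RAISERROR(@ErrMsg, 16, 1);   -- re-raise to the caller (THROW on SQL Server 2012+)
    END CATCH
END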

This looks very promising, appears to handle almost all exception and transaction related requirements, and is the prompt answer most of us give when asked about error logging in a SQL Server database.

As mentioned above, in case of any exception the current transaction is rolled back, and we can then perform the other required auditing operations inside the CATCH block, such as logging the custom error to some T_Error table and finally raising/throwing the error to the calling code. However, event logging is not limited to error logging: we may want to capture all the events of interest during the execution of a stored procedure, e.g. at the beginning of the procedure, at the start of a particular statement, at the end of the procedure, and so on. To implement this we can create an event-logging procedure (possibly running in its own transaction) that can be called at any of those points inside a stored procedure, and also in the CATCH block to log the error in case of an exception. All good so far.

Let's make the situation a bit more complicated. If we look at the code block above more carefully, a question comes to mind: what happens if we have done loads of custom logging inside a stored procedure and then suddenly hit an unwanted exception, so that at the end everything is rolled back, and 'everything' includes the logging we have done :( . Moreover, if this procedure is executed inside some parent procedure, an exception in the child proc will roll back not only the child proc's transaction (including the logging) but the parent's as well. To understand this, one needs to be aware that the outermost transaction is the one that really counts: nested COMMITs only decrement @@TRANCOUNT, and a ROLLBACK issued anywhere rolls back the whole transaction.

So at this point we understand the problem and the risk: the event/error logging done inside a stored proc can be lost if we rely on the transaction opened by the current or the parent proc. Hence, you would all agree that the logging should run inside some independent/autonomous transaction, so that error/event messages are persisted even when the surrounding transaction rolls back. The bad news is that Microsoft SQL Server has no direct equivalent of an autonomous transaction; nested transactions are not the same thing.

In Oracle there is a straightforward way of doing this via the PRAGMA AUTONOMOUS_TRANSACTION directive, but in SQL Server we need a workaround to achieve autonomous event/error logging. One option is the loopback approach (setting up a linked server and making an RPC call back into the same instance), which is not very efficient; another is a CLR stored procedure, which is the recommended approach.

In this article we will implement autonomous logging in SQL Server using a CLR stored procedure. We won't go into the detailed workings of SQLCLR stored procedures here.

1. Let's assume we have a SQL stored procedure usp_EventLog written to log events (start, info, error, end, etc.) into an event-log table (say dbo.T_EventLog).
2. This proc has its own transaction, with commit and rollback.
3. We can pass the event type (ID), the ID/name of the calling parent proc, the message to be logged, and so on.

Now, create a CLR stored procedure (using Visual Studio) that acts as a wrapper and executes the SQL proc on a separate connection, i.e. autonomous logging. We are setting it up with three parameters for the moment: the calling proc's name (String), EventLogTypeID (Int) and EventMessage (String). These parameters should be in line with the underlying SQL procedure.

The CLR String data type maps to NVARCHAR in SQL Server.

[Image: ErrorLogging_2]

This CLR stored proc needs to be registered as an assembly in SQL Server, and a SQL procedure then needs to be defined that calls into that assembly; alternatively, the CLR procedure can be deployed directly from Visual Studio.
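Roughly, the manual registration could look like the sketch below; the DLL path and the assembly/class/method names are assumptions, and EXTERNAL_ACCESS (plus a trusted or signed assembly) is needed because the CLR proc opens its own connection:

CREATE ASSEMBLY AutonomousLogging
FROM 'C:\CLR\AutonomousLogging.dll'          -- path is illustrative
WITH PERMISSION_SET = EXTERNAL_ACCESS;
GO

CREATE PROCEDURE dbo.usp_CLR_AutonomousLogging
    @CallingProcName NVARCHAR(256),
    @EventLogTypeID  INT,
    @EventMessage    NVARCHAR(MAX)
AS EXTERNAL NAME AutonomousLogging.StoredProcedures.usp_CLR_AutonomousLogging;
GO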

So, finally, we create another SQL procedure named dbo.usp_CLR_AutonomousLogging which in turn references the CLR assembly. Whenever event logging is needed inside a SQL procedure, this proc should be called with the required parameters. It executes on a separate, autonomous connection, so the event logging is never rolled back even if the calling proc fails.

Make sure CLR execution is enabled on the SQL Server instance; if not, the script below can be run to enable it.

[Image: ErrorLogging_3]
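Presumably something like:

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;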

…calling the CLR proc

[Image: ErrorLogging_4]
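A hedged example call (parameter names match the sketch above; values are illustrative):

EXEC dbo.usp_CLR_AutonomousLogging
     @CallingProcName = N'dbo.usp_LoadOrders',
     @EventLogTypeID  = 1,                      -- e.g. 'Info'
     @EventMessage    = N'Load started';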


Database Partitioning in SQL Server / Data Archiving using the partitioning mechanism (Part 2)

In Part 1 of database partitioning we talked about why we need it and how to implement it in a SQL Server database. We also saw how to add new ranges and filegroups to the existing partitions. In this article we talk about the next bit: getting rid of existing/empty partitions.

Before jumping into how to achieve this, let's see when we end up with an empty partition, or rather why we would want to remove an existing partition. In other words, there is a very useful application of database partitioning in achieving database archiving.

Database archiving / moving data between tables: nowadays it is useful, and relatively inexpensive, to keep older records around for a long time for analysis purposes. From the database perspective, though, we generally keep the older data in a separate 'archive/history' table and the current data (based on some date or similar) in the main table. Moving data from the main table to the archive table on a regular basis (mostly daily) can be an expensive affair if the volume to be moved is large enough to give a significant performance hit, and it can be time-consuming too if the data is copied with plain SELECT/INSERT and DELETE statements, even when done in smaller batches.

And this is where database partitioning proves to be an amazing approach.

First, let's see how the data is distributed across the existing partitions of our table (dbo.Fact_Orders).

[Image: DatabaseArchiving_1]

As you can see, there are 6 partitions at the moment. The oldest one, partition #1, contains the data for the first quarter of 2014 (55 rows); this is the recordset we need to move to the archive table and then remove from the main table.

So there are basically two parts to it: first, moving the data from the main table to the archive/history table, i.e. SWITCH PARTITION; and second, removing that data/partition from the main table, i.e. MERGE RANGE. We will discuss these one by one.

SWITCH: there are two types of data movement: moving data from the partitioned (main) table to a non-partitioned (archive/history) table, or moving data from another partitioned table into the main partitioned table (in which case both the main and the archive/history tables are partitioned).

Let’s first focus on our requirement of moving data of the oldest partition from the main (partitioned) table to the history table (a non-partitioned table) i.e. ‘Switching Data Out’.

Assume there is a history table dbo.Fact_Orders_History with exactly the same schema as the main table dbo.Fact_Orders.

Now we need to attach partition #1 of dbo.Fact_Orders to the history table dbo.Fact_Orders_History, i.e. move all the data from partition #1 into the history table. This is achieved using the ALTER TABLE ... SWITCH command.

[Image: SwitchPartition_1]
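Using the table names from this article, the statement is presumably along these lines:

-- move all rows of partition 1 of the main table into the (non-partitioned) history table
ALTER TABLE dbo.Fact_Orders
    SWITCH PARTITION 1 TO dbo.Fact_Orders_History;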

–as of now no record is there in the history table.

[Image: SwitchPartition_5]

– All 55 records from partition #1 have now been moved to the history table.

Also, looking at the partition distribution of the main table, there are no records left in partition #1, i.e. it is now an empty partition.

[Image: SwitchPartition_4]

Likewise, we can 'switch data in', i.e. move data from one partitioned table into another partitioned table. Let's take an example where our archive table is also partitioned: we want to move the latest partition from the main table into the archive table and then delete the oldest one from the archive table.

Again, this is achieved with the ALTER TABLE command. The only difference is that earlier (while switching data out) we moved data to a non-partitioned table, so no check was actually required; now we are attaching one partition of a table to a specific partition of another partitioned table, so the data being moved must adhere to the boundaries of the partition it is moved into.

Therefore there must be a check constraint on the partitioning column of the main table matching the definition of the archive table's partition that the data is moved into.

[Image: SwitchPartition_6]

[Image: SwitchPartition_7]
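A hedged sketch of switching data in (the archive table name and the partition numbers are illustrative):

-- both tables are partitioned here; the rows being moved must fit the
-- boundary range of the target partition (hence the check constraint above)
ALTER TABLE dbo.Fact_Orders
    SWITCH PARTITION 5 TO dbo.Fact_Orders_Archive PARTITION 5;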

This will also move all the data from the main table to the archive table’s 5th partition.

In both cases the main table's partition is left empty, which is of no use, and it should therefore be removed. This is achieved using the MERGE RANGE command.

MERGE RANGE: this removes an existing partition, i.e. it drops a partition boundary and merges any values that exist in the partition into one of the remaining partitions. It is therefore very useful for getting rid of an empty or unused partition. Merging partitions is self-explanatory: take two partitions and make them one.

If the two partitions being merged are empty, the operation is instantaneous (no I/O involved). If they aren't empty, the data from one partition is physically moved to the other (remember that each partition resides on a certain filegroup). Hence it is advisable to first empty/switch out the partition that is no longer needed and then merge it.
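A sketch of the merge (the partition function name and boundary value are assumptions based on Part 1):

-- drop the now-empty first boundary; its (empty) range is merged into the neighbouring partition
ALTER PARTITION FUNCTION pf_OrderDate()
    MERGE RANGE ('20140101');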

[Image: MergePartition_1]

Now, look at the data distribution again…

[Image: MergePartition_2]

Partition #1, the empty one, has now been removed.

Summary:

1. For larger data sets it is very useful to implement database partitioning.
2. It helps in moving chunks of data from one partitioned table to another partitioned or non-partitioned table.
3. After moving the data, the empty partition should be removed using MERGE RANGE, and a new partition can then be added (to the main table, to receive new records) using SPLIT RANGE.

 


Database Partitioning in SQL Server (Part 1)

While working with large databases/tables we come across several issues: queries running slow (i.e. data performance issues), the time it takes to perform certain maintenance operations, and replicating/deleting data from these tables.

To deal with these bottlenecks, we think of data partitioning as the easiest and best solution. It provides the means to effectively manage and scale data at a time when tables are growing exponentially.

But to implement this in real scenarios, we not only need to give a good amount of thought to which tables to apply it to, the granularity of the data to be partitioned, whether indexes should be partitioned and so on, but also need to consider whether it could do any harm.

In this article we are going to discuss the basic need for partitioning, its high-level implementation, and the scenarios in which it is really beneficial.

Purpose/Benefits of data partitioning:

  • As stated above, the basic need for data partitioning arises when data tables become quite large and queries on them run slowly as a result. By partitioning we effectively split the whole table into a few smaller ones, which makes queries run faster and makes data sorting for I/O operations much quicker.
  • For larger tables, other maintenance activities such as index rebuilds, compression and statistics updates also become significantly slower; performing them partition by partition makes this easier.
  • It helps reduce locking when different partitions are being inserted, updated, deleted or selected in different transactions.
  • It also helps to transfer or access subsets of data quickly and efficiently, while maintaining the integrity of the data collection.

Overview:

When we create a simple data table in SQL Server, a single partition is created for it automatically, i.e. unless defined otherwise, the whole table is stored in one partition and hence on one filegroup. When partitions are explicitly created, the data is partitioned horizontally, so that groups of rows are mapped to individual partitions. The best part is that the table or index is treated as a single logical entity when queries or updates are performed on the data, i.e. it is completely transparent to applications (as long as you don't have to change primary and foreign keys): they don't even have to know the table is partitioned.

Partitioned tables and indexes support all the properties and features associated with designing and querying standard tables and indexes, including constraints, defaults, identity and timestamp values, and triggers.

All partitions of a single index or table must reside in the same database.

Deciding factors: to partition or not to partition

  • The biggest reason to partition a table is that it contains, or is expected to contain, lots of data that is used in different ways, probably by different queries, and, more importantly, that there is a field we can partition on, e.g. a date column in a large fact table, so that queries for different years each hit only their own partition and run faster.
  • Queries or updates against the table are not performing as intended, or maintenance costs exceed the predefined maintenance windows.
  • There are frequent lock-escalation issues at table level.
  • Data archival: in scenarios such as loading the latest data into a data warehouse, where the table is heavily accessed at the same time and we also need to switch the oldest data out of it, partitioning becomes very useful.
  • A big question to ask yourself is whether your table is really big enough to be partitioned. If not, partition management becomes an overhead rather than an advantage. Also, we need the Enterprise edition of SQL Server to be able to implement it.

Basic Implementation of partitioning

There can be different scenarios/ requirements and approaches:

  1. Partitioning a new table being created
  2. Partitioning the existing table
  3. Transfer / switch partition of a partitioned table.

Partitioning a new table being created

There are a few steps to create a partitioned table as follows:

Partition Function: when we say we want to partition a particular table, we actually mean storing the whole dataset in smaller chunks, so we need to define how those chunks are made. That is where the partition function comes into the picture.

A partition function is a SQL object that defines how the rows of a table or index are mapped to a set of partitions based on the values of a certain column, called the partitioning column (e.g. a datetime column), and how the boundaries of the partitions are defined. This essentially determines how many partitions there will be for a particular table.

Computed columns that participate in a partition function must be explicitly marked PERSISTED. All data types that are valid for use as index columns can be used as a partitioning column, except timestamp; the ntext, text, image, xml, varchar(max), nvarchar(max) and varbinary(max) data types cannot be used.

Partition Scheme: as mentioned earlier, different partitions can be stored on different filegroups; a partition scheme is what allows this. It is a database object that maps the partitions of a partition function to a set of filegroups. The primary reason for placing partitions on separate filegroups is to be able to perform backup operations on partitions independently.

Now, let’s take an example to define all these objects and apply them onto a data table to make it partitioned table.

E.g. a fact table dbo.Fact_Orders receives lots of order transactions (OrderId, Amount, SalespersonID, OrderDate) and is queried by OrderDate, so we need to partition it on OrderDate.

Create Partition Function:

[Image: PF1]
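A sketch of what such a function could look like (the function name is an assumption):

CREATE PARTITION FUNCTION pf_OrderDate (DATETIME)
AS RANGE RIGHT FOR VALUES ('20140101', '20140401', '20140701', '20141001');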

This partition function has four boundary values (the first day of each quarter of 2014) on a datetime column, which splits the table into five partitions.

The RIGHT keyword defines how the boundaries behave: each boundary value belongs to the partition on its right. So records are distributed by OrderDate (or whichever column the function is applied to) like this:

Partition 1 -> OrderDate < '01Jan2014'
Partition 2 -> OrderDate >= '01Jan2014' AND OrderDate < '01Apr2014'
Partition 3 -> OrderDate >= '01Apr2014' AND OrderDate < '01Jul2014'
Partition 4 -> OrderDate >= '01Jul2014' AND OrderDate < '01Oct2014'
Partition 5 -> OrderDate >= '01Oct2014'

As shown, the right-most partition, Partition 5, will contain all values >= '01Oct2014'. LEFT is the default if the range direction is not specified.

Now, if we want to check which partition a particular record (a date) falls into:
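The $PARTITION function does this (using the assumed function name from above):

SELECT $PARTITION.pf_OrderDate('20140520') AS PartitionNumber;   -- returns 3 (Q2 2014)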

[Image: PF2]

Create Partition Scheme:

[Image: PS1]
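A sketch (the scheme name is an assumption, matching the function above):

CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);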

This stores all the partitions on a single filegroup, the PRIMARY filegroup.

If we want different partitions to be stored into different filegroups, we need to specify them explicitly as below:
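A sketch with explicit filegroups (the names are illustrative and the filegroups must already exist; the extra sixth one becomes the NEXT USED filegroup):

CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate
TO (FG_Old, FG_2014Q1, FG_2014Q2, FG_2014Q3, FG_2014Q4, FG_Next);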

[Image: PS2]

These filegroups need to be pre-defined. If an extra filegroup is specified, it is marked as NEXT USED, i.e. it will be used for the next partition to be added. We will talk later about how to add new partitions to an existing function.

Now, while creating the table, we assign this partition scheme (and thus the partition function) to the table via the partitioning column, so that the table structure is in line with this partitioning strategy.

[Image: PTable]
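A sketch of the table bound to the scheme (column types are assumptions):

CREATE TABLE dbo.Fact_Orders
(
    OrderId       INT            NOT NULL,
    Amount        DECIMAL(18, 2) NULL,
    SalespersonID INT            NULL,
    OrderDate     DATETIME       NOT NULL
)
ON ps_OrderDate (OrderDate);    -- the partitioning column decides which partition each row lands in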

ALTER Partition:

We defined the partitions of our fact table based on the current requirement, i.e. for the year 2014. Next year we will need to add more partitions; in other words, we want to create a new boundary in our existing partitioning implementation, e.g. an extra partition for Q1 of 2015. Later we will also talk about how to remove an existing, obsolete partition that is no longer required.

And this can be implemented via ALTER PARTITION FUNCTION statement as follows:

SPLIT RANGE: this adds a new partition boundary to the existing partition function by splitting one of the existing ranges into two.
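For example, adding a boundary for Q1 2015 (function name assumed as before):

ALTER PARTITION FUNCTION pf_OrderDate()
    SPLIT RANGE ('20150101');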

[Image: PF3]

This works absolutely fine here because we already have an additional, unassigned filegroup available to accommodate the new partition: when we add filegroups to a partition scheme and list one extra filegroup at the end, it is automatically marked as NEXT USED, i.e. to be used for the next partition added, so nothing else needs to be done.

But if we haven't defined any extra filegroup in the partition scheme, then before splitting a range of the partition function we need to alter the partition scheme as below:

[Image: PS3]

Or, if the primary filegroup is to be used:

[Image: PS4]
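A sketch of both variants (scheme and filegroup names are assumptions):

-- point the scheme at the filegroup the next partition should use
ALTER PARTITION SCHEME ps_OrderDate NEXT USED FG_2015Q1;

-- or, to use the primary filegroup:
ALTER PARTITION SCHEME ps_OrderDate NEXT USED [PRIMARY];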

 


DELETE vs TRUNCATE – SQL Server

'What is the difference between DELETE and TRUNCATE in SQL Server' is a very simple question, and both statements are used very frequently for more or less the same purpose, but there are various aspects we should keep in mind to use them effectively. I am sure the explanation below will make you think before choosing DELETE or TRUNCATE next time; most of us simply use one as an alternative to the other.

Basic differences in terms of usage:

1. TRUNCATE deletes the entire table's data. DELETE can delete either the entire table's data or selected records by using a WHERE clause.
2. TRUNCATE resets the SEED value of an identity column to its default (starting) value, whereas DELETE does not.
3. TRUNCATE does not invoke triggers, whereas DELETE does.
4. TRUNCATE is DDL (Data Definition Language) and DELETE is DML (Data Manipulation Language).
5. TRUNCATE TABLE cannot be used when a foreign key references the table to be truncated. If all rows need to be removed with TRUNCATE and a foreign key references the table, you must drop the foreign key constraint and recreate it afterwards; DELETE just needs the referencing rows/data to be removed first.
6. TRUNCATE requires at least ALTER permission on the table (membership in the db_owner or db_ddladmin roles covers this), whereas DELETE only needs DELETE permission.
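A quick demo of points 1 and 2 (the table is made up):

CREATE TABLE dbo.T_Demo (Id INT IDENTITY(1,1), Val VARCHAR(10));
INSERT INTO dbo.T_Demo (Val) VALUES ('a'), ('b'), ('c');

DELETE FROM dbo.T_Demo;                            -- removes all rows, seed is kept
INSERT INTO dbo.T_Demo (Val) VALUES ('d');
SELECT Id FROM dbo.T_Demo;                         -- Id = 4

TRUNCATE TABLE dbo.T_Demo;                         -- removes all rows, seed is reset
INSERT INTO dbo.T_Demo (Val) VALUES ('e');
SELECT Id FROM dbo.T_Demo;                         -- Id = 1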

In terms of Performance – internal working

1. TRUNCATE is faster than DELETE, because TRUNCATE only needs locks on the table and schema and does not need locks on the individual rows of the table as DELETE does.

TRUNCATE TABLE quickly deletes all records in a table by de-allocating the data pages used by the table. This reduces the resource overhead of logging the deletions as well as the number of locks acquired; the operation is minimally logged, and the only record of the truncation in the transaction log is the page de-allocation, so the individual removed rows cannot be recovered from the log. The unhooked pages are removed either synchronously or asynchronously (so-called deferred de-allocation), depending on whether the table is small or quite large, respectively.

DELETE statements, on the other hand, remove rows one at a time, logging each row in the transaction log and maintaining log sequence number (LSN) information. This consumes more database resources and locks, but the transaction can be rolled back if necessary.

A FEW MYTHS HERE……

Myth1: Truncate cannot be rolled back.

Fact: That’s not true. Truncate Table command is transactional. It can be rolled back – if it happens within an explicit transaction.

Let me reiterate it to illustrate the reality….TRUNCATE can be rolled back only if

  1. an explicit transaction is open: because SQL Server records which pages and extents were de-allocated, there is enough information to roll back by simply re-allocating those pages later; and
  2. the transaction has not yet committed: once it has, the truncated data cannot be brought back via the log files, even if the database is in FULL recovery mode.

DELETE, on the other hand, can always be rolled back.
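A small demo of the TRUNCATE rollback, reusing the made-up table from the sketch above:

INSERT INTO dbo.T_Demo (Val) VALUES ('x'), ('y');

BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.T_Demo;
    SELECT COUNT(*) FROM dbo.T_Demo;   -- 0 inside the transaction
ROLLBACK TRANSACTION;

SELECT COUNT(*) FROM dbo.T_Demo;       -- the truncated rows are back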

A small question to answer here: how does SQL Server know not to reuse the pages that belonged to the truncated table? It turns out the pages and/or extents involved are locked with an exclusive (X) lock, and just like all X locks, they are held until the end of the transaction. As long as the pages or extents are locked they cannot be de-allocated, and certainly cannot be reused.

Myth 2: TRUNCATE generates fewer log records. Fact: it depends; if the table is small enough, truncating it will actually generate more log records than deleting from it.

Up to this point it is fairly clear which statement to use for a given requirement. There is, however, one last but substantial point to consider, explained below.

So far TRUNCATE seems to be the better statement, but wait a minute: this does not always hold true. Consider a scenario where a large table needs its existing records removed (perhaps via a partition switch/merge) while, at the same time, data is being selected from or inserted into that same table.

As stated above, TRUNCATE takes an exclusive schema-modification (Sch-M) lock on the entire table, so a large table becomes inaccessible for the duration and concurrent sessions can end up blocked or even deadlocked. In such cases it is advisable to use DELETE instead, performing the deletes in smaller batches.
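A common batching pattern looks like this (table name, batch size and cut-off are illustrative):

WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM dbo.Fact_Orders
    WHERE OrderDate < '20140101';      -- archival cut-off

    IF @@ROWCOUNT = 0                  -- nothing left to delete
        BREAK;
END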

Similar to this, sometimes deadlock is also encountered due to the following:

Each temporary table has an in-memory structure that contains a counter of all the pending transactions that operated on the table. When this counter decreases to 0, the temporary table is dropped in an autonomous transaction. However, the TRUNCATE TABLE statement does not increase the counter. Therefore, if the autonomous transaction tries to drop the temporary table before the transaction that runs the TRUNCATE TABLE statement commits, a deadlock occurs. The deadlock occurs between the autonomous transaction and the transaction that runs the TRUNCATE TABLE statement.


Replicate/Migrate data from one database/table to another – Part 3

In Part 1 and Part 2 we discussed very simple approaches to migrating changes from one data table to another; depending on the situation, data volumes and so on, those approaches need a few improvements.
In this article we will talk about the limitations of the earlier approaches and then go through another approach that takes care of them.

Limitations of part1/part2:

1. Large data volume in the source/target table(s): if the tables being merged carry loads of data, running merge joins across them will be slow. And if not many changes are expected at a time, it is an unnecessary overhead to run the checks on the complete data set.
2. Large number of changing attributes: consider tables with a large number of columns (say 100), most of which are potential changing attributes, i.e. any of them can change between source and target, and while merging we do not know which one changed. Running a long chain of OR conditions (as in Part 1) just to detect whether anything changed can be so slow that it puts severe load on the database server.

A better approach for the situations listed above…

  1. Dealing with large data tables:

If we are not expecting many changes at a time (since the last time the merge process ran), it is always good to identify the changed (new or updated) records using some timestamp column.

E.g. Assume if we have a ModifiedDate column into both source & destination table. Then at the beginning of the merge process, first identify the records modified, inserted newly or so; after the last run/merge.

(Note: Here we are assuming that source table is not truncated & reloaded full, otherwise we would need to perform the merge check on complete data-set)

1. Get the last run date (ModifiedDate). This can either be fetched as MAX(ModifiedDate) from the target table, or the current datetime can be stored in a configuration table when the merge operation runs and read back on the next run.

2. Get all the records from the source table (into a temp table or similar)
WHERE Source.ModifiedDate > MAX_ModifiedDate

3. Perform the merge operations, either with a MERGE statement (a sketch follows the note below) or by means of SQL JOINs from the source's temp table (records > MAX_ModifiedDate) to the target on the joining keys.

4. Update the target's ModifiedDate and/or update the configuration date with the datetime at which the operation was performed.

This cuts the whole process down to the limited number of records that are new or updated since the last run.

N.B. In case of soft deletes, the UPDATE operation takes care of them. But for hard deletes (where records are physically deleted at the source), we would still need to check the complete table.
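As a hedged sketch of step 3 using MERGE, assuming the changed source rows have already been staged in a temp table #T_SourceEmp (column names follow the example further down in this article):

MERGE dbo.T_DestinationEmployee AS trg
USING #T_SourceEmp AS src
   ON trg.EmployeeID = src.EmployeeID
WHEN MATCHED THEN
    UPDATE SET trg.EmployeeName  = src.EmployeeName,
               trg.DateOfJoining = src.DateOfJoining,
               trg.Designation   = src.Designation,
               trg.ManagerName   = src.ManagerName,
               trg.ModifiedDate  = src.ModifiedDate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (EmployeeID, EmployeeName, DateOfJoining, Designation, ManagerName, ModifiedDate)
    VALUES (src.EmployeeID, src.EmployeeName, src.DateOfJoining, src.Designation, src.ManagerName, src.ModifiedDate);
-- no WHEN NOT MATCHED BY SOURCE clause here: the source only holds the changed subset,
-- so deletes must be handled separately (see the N.B. above).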

2. Find the changed records more efficiently: as mentioned in the previous approaches, to find out which records have actually changed (so that the update operation can be performed on them) we used OR conditions, e.g.

WHERE Source.Col1 <> Target.Col1
   OR Source.Col2 <> Target.Col2
   OR …

This would be very slow if the change has to be checked across a large number of attributes/columns.

There are a few other ways to find the changed records more efficiently:

  1. By using INTERSECT
  2. By using EXCEPT

Usage of the two clauses is more or less similar. We will take INTERSECT as the example here; an EXCEPT-based variant is sketched at the end of this article.

  1. First find out the matching records using INTERSECT

-- Take the changed records from the source table into a temp table, i.e. everything after the last run
DECLARE @MaxModifiedDate DATETIME;
SELECT @MaxModifiedDate = MAX(ModifiedDate) FROM dbo.T_DestinationEmployee;

SELECT *
INTO   #T_SourceEmp
FROM   dbo.T_SourceEmployee
WHERE  ModifiedDate > @MaxModifiedDate;

-- Get the intersected data
SELECT *
INTO   #CommonData
FROM
(
    SELECT EmployeeID, EmployeeName, DateOfJoining, Designation, ManagerName
    FROM   #T_SourceEmp

    INTERSECT

    SELECT trg.EmployeeID, trg.EmployeeName, trg.DateOfJoining, trg.Designation, trg.ManagerName
    FROM   dbo.T_DestinationEmployee trg
    INNER JOIN #T_SourceEmp src
            ON trg.EmployeeID = src.EmployeeID
) AS CommonData;

So #CommonData contains the records that show no actual difference between the source and destination tables (even though their ModifiedDate changed); these can be ignored.

The records that were updated at the source but are not in #CommonData are the actual changes to be applied.

2. Find the updated records and perform the update operation:

SELECT src.*
INTO   #T_Updates
FROM   dbo.T_SourceEmployee src
LEFT JOIN #CommonData CD
       ON src.EmployeeID = CD.EmployeeID
WHERE  src.ModifiedDate > @MaxModifiedDate   -- MAX(target.ModifiedDate), captured above
  AND  CD.EmployeeID IS NULL;                -- i.e. not present as common/unchanged data

UPDATE trg
SET    trg.EmployeeName  = upd.EmployeeName,
       trg.DateOfJoining = upd.DateOfJoining,
       trg.Designation   = upd.Designation,
       trg.ManagerName   = upd.ManagerName
FROM   dbo.T_DestinationEmployee trg
INNER JOIN #T_Updates upd
        ON trg.EmployeeID = upd.EmployeeID;
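For completeness, a hedged EXCEPT-based variant of the same idea: EXCEPT returns the source rows that differ from, or do not exist in, the target in a single step, so the intermediate #CommonData table is not needed.

SELECT d.*
INTO   #T_Changed                     -- illustrative temp table name
FROM
(
    SELECT EmployeeID, EmployeeName, DateOfJoining, Designation, ManagerName
    FROM   #T_SourceEmp
    EXCEPT
    SELECT EmployeeID, EmployeeName, DateOfJoining, Designation, ManagerName
    FROM   dbo.T_DestinationEmployee
) AS d;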

 
