Database Partitioning in SQL Server – Part 1

While working on large databases/tables we come across several issues: queries running slow (i.e. data performance issues), the time it takes to perform certain maintenance operations, and replicating/deleting data from these tables.

To deal with these bottlenecks, we often think of data partitioning as the easiest and best solution. It provides the means to effectively manage and scale your data at a time when tables are growing exponentially.

But to implement this in real scenarios, we not only need to give a good amount of thought to questions such as which tables to apply it to, the granularity of the data to be partitioned, and whether indexes should be partitioned as well, but we also need to consider whether doing so could cause any harm.

In this article we will discuss its basic purpose, a high-level implementation, and the scenarios in which it is really beneficial.

Purpose/Benefits of data partitioning:

  • As stated above, the basic need for data partitioning arises when the size of a table becomes quite large and queries on that table run very slowly as a result. By means of partitions we can split the whole table into a few smaller ones, make queries run faster, and speed up data sorting for I/O operations.
  • Furthermore, on larger tables, other maintenance activities like index rebuilds, compression and statistics updates also become significantly slower; performing them per partition makes this easier.
  • It helps reduce locking when different partitions are inserted, updated, deleted or selected in different transactions.
  • It also helps to transfer or access subsets of data quickly and efficiently, while maintaining the integrity of the data collection.

Overview:

When we create a simple table in SQL Server, a partition is automatically created for it, i.e. unless defined otherwise, the whole table is stored in one partition and hence on one filegroup. When partitions are explicitly created, the data is partitioned horizontally, so that groups of rows are mapped into individual partitions. The best part is that the table or index is treated as a single logical entity when queries or updates are performed on the data, i.e. it’s completely transparent to applications (as long as you don’t have to change primary and foreign keys): they don’t even have to know the table is partitioned.

Partitioned tables and indexes support all the properties and features associated with designing and querying standard tables and indexes, including constraints, defaults, identity and timestamp values, and triggers.

All partitions of a single index or table must reside in the same database.

Deciding factors: whether or not to implement partitioning

  • The biggest reason to partition a table is that it contains, or is expected to contain, lots of data that is used in different ways, probably in different queries. More importantly, we need some column on which the data can be partitioned, e.g. in a large fact table a date column that can be partitioned by year, so that queries executed for different years run faster.
  • Queries or updates against the table are not performing as intended, or maintenance costs exceed predefined maintenance periods.
  • Frequent lock escalation issues at the table level.
  • Data archival: in scenarios such as loading the latest data into a data warehouse where the table is being accessed heavily at the same time and we also need to switch out the oldest data from that table, partitioning becomes very useful.
  • A big question to ask yourself is whether your table is really big enough to be partitioned. If not, the overhead of partition management will outweigh the advantages. Also, we need the Enterprise edition of SQL Server to be able to implement it.

Basic Implementation of partitioning

There can be different scenarios/ requirements and approaches:

  1. Partitioning a new table being created
  2. Partitioning the existing table
  3. Transfer / switch partition of a partitioned table.

Partitioning a new table being created

There are a few steps to create a partitioned table as follows:

Partition Function: When we say we want to partition a particular table, we actually mean storing the whole dataset in smaller chunks. So we need to define how to make those data chunks, and this is where the Partition Function comes into the picture.

So, a Partition Function is a SQL object that defines how the rows of a table or index are mapped to a set of partitions based on the values of a certain column, called the partitioning column (e.g. a datetime column), and how the boundaries of the partitions are defined. This essentially determines how many partitions there are going to be for a particular table.

Computed columns that participate in a partition function must be explicitly marked PERSISTED. All data types that are valid for use as index columns can be used as a partitioning column, except timestamp. The ntext, text, image, xml, varchar(max), nvarchar(max), or varbinary(max) data types cannot be specified.

Partition Scheme: As mentioned earlier, different partitions can be stored differently across various filegroups; a Partition Scheme allows us to do that. It is a database object that maps the partitions of a partition function to a set of filegroups. The primary reason for placing your partitions on separate filegroups is to make sure that you can perform backup operations on partitions independently.

Now, let’s take an example to define all these objects and apply them to a data table to make it a partitioned table.

e.g. a fact table ‘dbo.Fact_Orders’ into which we receive lots of order transactions (OrderId, Amount, SalespersonID, OrderDate) and which is queried based upon OrderDate; hence we partition it on OrderDate.

Create Partition Function:

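The original code screenshot is not available here, so the following is a minimal sketch of the statement being described (the function name PF_OrderDate is an assumption):

CREATE PARTITION FUNCTION PF_OrderDate (DATETIME)
AS RANGE RIGHT
FOR VALUES ('20140101', '20140401', '20140701', '20141001')
GO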

This partition function defines four boundary values on a datetime column, one for each quarter of the year 2014, which splits the table into five partitions.

The ‘RIGHT’ keyword defines how the boundary values are assigned, so the partitions will hold records based on OrderDate (or whichever column this partition function is applied to) as follows:

Partition1 -> OrderDate < ‘01Jan2014’

Partition2 -> OrderDate >= ‘01Jan2014’ AND OrderDate < ‘01Apr2014’

Partition3 -> OrderDate >= ‘01Apr2014’ AND OrderDate < ‘01Jul2014’

Partition4 -> OrderDate >= ‘01Jul2014’ AND OrderDate < ‘01Oct2014’

Partition5 -> OrderDate >= ‘01Oct2014’

As shown, the right-most partition ‘Partition5’ will contain all the values >= ‘01Oct2014’. LEFT is the default unless RIGHT is specified.

Now if we want to check which partition a particular record (a date) resides in:

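A minimal sketch using the built-in $PARTITION function (again assuming the function is named PF_OrderDate):

SELECT $PARTITION.PF_OrderDate('20140515') AS PartitionNumber
--Returns 3, i.e. the Apr-Jun 2014 partition
GO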

Create Partition Scheme:

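A minimal sketch (the scheme name PS_OrderDate is an assumption):

CREATE PARTITION SCHEME PS_OrderDate
AS PARTITION PF_OrderDate
ALL TO ([PRIMARY])
GO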

This stores all the partitions on a single filegroup, i.e. the primary filegroup.

If we want different partitions to be stored into different filegroups, we need to specify them explicitly as below:

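Alternatively, a sketch with explicit filegroups, one per partition plus one spare (the filegroup names are assumptions):

CREATE PARTITION SCHEME PS_OrderDate
AS PARTITION PF_OrderDate
TO (FG_2013, FG_2014Q1, FG_2014Q2, FG_2014Q3, FG_2014Q4, FG_Next)
GO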

These filegroups need to be pre-defined. If an extra filegroup is specified, it is marked as NEXT USED, i.e. it will be used for the next partition added. We will talk later about how to add new partitions to the existing function.

Now, while creating the table, we can assign this partition scheme (and thus the partition function) to the table by means of the partitioning column, and hence the table structure will be in line with this partitioning strategy.

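A sketch of the table creation (the column data types are assumptions):

CREATE TABLE dbo.Fact_Orders
(
OrderId INT NOT NULL
,Amount DECIMAL(18,2) NOT NULL
,SalespersonID INT NOT NULL
,OrderDate DATETIME NOT NULL
) ON PS_OrderDate (OrderDate)
GO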

ALTER Partition:

We defined certain partitions for our fact table based upon the current requirement, i.e. for the year 2014. Next year, let’s say we need to add some more partitions. What we want to do is create a new boundary for our existing partitioning implementation, e.g. add another partition for Q1 of 2015. We will also talk later about how to remove an existing, obsolete partition which is no longer required.

And this can be implemented via ALTER PARTITION FUNCTION statement as follows:

SPLIT RANGE: This adds a new partition boundary to the existing partition function by splitting one of the existing ranges into two.

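A sketch of the split, again assuming the names used above:

ALTER PARTITION FUNCTION PF_OrderDate()
SPLIT RANGE ('20150101')
GO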

This will work absolutely fine because we already have an additional filegroup left unassigned to accommodate the new partition. When we add filegroups to a partition scheme and mention an extra filegroup at the end, it is automatically marked as NEXT USED, i.e. to be used for the next partition added, and hence nothing else needs to be done.

But, if we haven’t defined any extra filegroup to the partition scheme, then before splitting the range of a partition function, we would need to alter the partition scheme as below:

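A sketch, assuming a pre-created filegroup named FG_2015Q1:

ALTER PARTITION SCHEME PS_OrderDate
NEXT USED FG_2015Q1
GO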

Or, in case the primary filegroup is to be used:

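Continuing the same sketch:

ALTER PARTITION SCHEME PS_OrderDate
NEXT USED [PRIMARY]
GO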


DELETE vs TRUNCATE – SQL Server

Although ‘what is the difference between DELETE and TRUNCATE in SQL Server’ is a very simple question, and both statements are used very frequently for more or less the same purpose, there are various aspects we should keep in mind to use them effectively. I am sure the explanation below will make you think before choosing between DELETE and TRUNCATE next time: most of us simply treat one as an alternative to the other.

Basic differences in terms of usage:

  1. TRUNCATE deletes the entire table data. DELETE can be used to delete either the entire table data or selected records by using a WHERE clause.
  2. TRUNCATE resets the seed value of an identity column to its default (starting) value, whereas DELETE doesn’t.
  3. TRUNCATE doesn’t invoke triggers whereas DELETE does.
  4. TRUNCATE is DDL (Data Definition Language) and DELETE is DML (Data Manipulation Language).
  5. TRUNCATE TABLE cannot be used when a foreign key references the table to be truncated. If all rows need to be removed using TRUNCATE and a foreign key references the table, you must drop the constraint and recreate it afterwards. DELETE, on the other hand, just needs all the referencing rows to be removed first.
  6. TRUNCATE needs at least ALTER permission on the table (e.g. via the db_owner or db_ddladmin roles).

In terms of Performance – internal working

  1. TRUNCATE is faster than DELETE, because TRUNCATE only needs locks on the table and schema and does not need a lock on each row of the table as DELETE does.

TRUNCATE TABLE quickly deletes all records in a table by de-allocating the data pages used by the table. This reduces the resource overhead of logging the deletions, as well as the number of locks acquired; it is minimally logged, and the only record of the truncation in the transaction log is the page de-allocation. Rows removed by the TRUNCATE TABLE statement therefore cannot be recovered individually from the log. The de-allocated pages are removed synchronously or asynchronously (the latter is called deferred de-allocation), depending on whether the table is small or quite large respectively.

A DELETE statement, in contrast, deletes rows one at a time, logging each row in the transaction log, as well as maintaining log sequence number (LSN) information. Although this consumes more database resources and locks, these transactions can be rolled back if necessary.

A FEW MYTHS HERE……

Myth1: Truncate cannot be rolled back.

Fact: That’s not true. Truncate Table command is transactional. It can be rolled back – if it happens within an explicit transaction.

Let me reiterate to illustrate the reality… TRUNCATE can be rolled back only if:

  1. An explicit transaction is open. Because SQL Server records which pages and extents were de-allocated, there is enough information to roll back simply by re-allocating those pages later. And
  2. The transaction has not yet committed and the session is not closed; once committed, the truncated data can’t be recovered from the log files, even if the database is set to the Full recovery model.

On the other hand DELETE can always be rolled back.
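A quick way to see the TRUNCATE rollback for yourself, as a minimal sketch against a throwaway table (the name dbo.TestData is a placeholder):

BEGIN TRANSACTION
TRUNCATE TABLE dbo.TestData
SELECT COUNT(*) FROM dbo.TestData --0 inside the open transaction
ROLLBACK TRANSACTION
SELECT COUNT(*) FROM dbo.TestData --rows are back after the rollback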

A small question to be answered here is – How does SQL Server know not to reuse the pages that belonged to the table? It turns out the pages and/or extents involved are locked with an eXclusive lock, and just like all X locks, they are held until the end of the transaction.  And as long as the pages or extents are locked, they can’t be deallocated, and certainly cannot be reused.

Myth2: Truncate generates fewer log records. Fact: it depends. If the table is small enough, truncating it can actually generate more log records than deleting from it.

Up to this point it’s fairly clear which statement should be used for which requirement. Now, one last but substantial point also needs to be considered, as explained below.

Until now it seems TRUNCATE is the better statement over DELETE, but wait a minute. This does not always hold true. Consider a scenario where the existing records of a large table are to be deleted (perhaps via partition switch/merge etc.) while, at the same time, data is being selected from or inserted into the same table.

As stated above, a TRUNCATE statement takes an exclusive lock on the entire table, and in the case of a large table it is not accessible during that time, which can result in blocking or even deadlocks. Hence it is advisable to use DELETE, but to perform the deletes in smaller batches.
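A minimal sketch of such a batched delete (the table name, predicate and batch size are placeholders):

DECLARE @BatchSize INT = 10000

WHILE 1 = 1
BEGIN
DELETE TOP (@BatchSize)
FROM dbo.Fact_Orders
WHERE OrderDate < '20140101' --rows to be purged

IF @@ROWCOUNT < @BatchSize BREAK --nothing (or only a partial batch) left
END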

Similar to this, sometimes deadlock is also encountered due to the following:

Each temporary table has an in-memory structure that contains a counter of all the pending transactions that operated on the table. When this counter decreases to 0, the temporary table is dropped in an autonomous transaction. However, the TRUNCATE TABLE statement does not increase the counter. Therefore, if the autonomous transaction tries to drop the temporary table before the transaction that runs the TRUNCATE TABLE statement commits, a deadlock occurs. The deadlock occurs between the autonomous transaction and the transaction that runs the TRUNCATE TABLE statement.


Replicate/Migrate data from one database/table to another – Part 3

In Part 1/Part 2 we discussed very simple approaches for migrating changes from one data table to another, but depending upon the situation, data volume etc., those approaches need a few improvements.
In this article, we will talk about the limitations of the first approaches and go through another one that takes care of those limitations.

Limitations of part1/part2:

  1. Large data volume of source/target table(s): If the tables being merged carry loads of data, running merge joins on these tables will be slow. And if not many changes are expected at a time, it is again an overhead to run the checks on the complete data set.
  2. Large number of changing attributes: Consider tables with a good number of columns (say 100) where most of them are potential changing attributes, i.e. any of those fields can change (from source to target), but we are not sure which one has changed while merging. Running an OR condition over all of them (as mentioned in Part 1) to check whether there is any change may be so slow that it results in severe DB server load.

A better approach for the situations listed above…

  1. Dealing with large data tables:

If we are not expecting many changes at a time (since the last time the merge process ran), it is always good to identify the changed (new or updated) records using some timestamp column.

E.g. assume we have a ModifiedDate column in both the source and destination tables. Then, at the beginning of the merge process, first identify the records that were modified or newly inserted after the last run/merge.

(Note: here we are assuming the source table is not truncated & fully reloaded, otherwise we would need to perform the merge check on the complete data set)

  1. Get the last run date (ModifiedDate): this can be achieved either by fetching MAX(ModifiedDate) from the target table, or by storing the current datetime in some configuration table when the merge operation is performed and using it on the next run.
  2. Get all the changed records from the source table, e.g. into a temp table:

WHERE Source.ModifiedDate > @MAX_ModifiedDate

  3. Perform the merge operations, either using the MERGE statement or by means of SQL JOINs, from the source temp table (with records > @MAX_ModifiedDate) to the target for the matching records (upon the joining keys).
  4. Update the target’s ModifiedDate and/or update the configuration date with the datetime at which the operation was performed.

This cuts the whole process down to the limited number of records that are new or updated after the last run.
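Putting the steps above together, a minimal sketch could look like this (it assumes both tables carry a ModifiedDate column, as stated earlier):

DECLARE @MAX_ModifiedDate DATETIME

--1. Last run date, here taken from the target table
SELECT @MAX_ModifiedDate = ISNULL(MAX(ModifiedDate), '19000101')
FROM dbo.T_DestinationEmployee

--2. Only the records changed since the last run
SELECT *
INTO #T_SourceEmp
FROM dbo.T_SourceEmployee
WHERE ModifiedDate > @MAX_ModifiedDate

--3. & 4. Merge #T_SourceEmp into the target (JOINs or MERGE) and update the stored run date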

N.B. In the case of soft deletes, the UPDATE operation takes care of them. But for hard deletes (where records are physically removed), we would still need to check the complete table.

2. Find the changed records more efficiently: As mentioned in the previous approaches, when we tried to find out which records had actually changed so that the update operation could be performed on them, we used OR conditions, e.g.

WHERE Source.Col1 <> Target.Col1

OR

Source.Col2 <> Target.Col2

……..

This would be very slow if the change has to be looked for across a very large number of attributes/columns.

There are a few other solutions to find the changed records in a more efficient way:

  1. By using INTERSECT
  2. By using EXCEPT

Usage of both clauses is more or less similar. We will take an example of INTERSECT here.

  1. First find out the matching records using INTERSECT

--Take the changed records from the source table into a temp table (after the last run)
SELECT * INTO #T_SourceEmp
FROM dbo.T_SourceEmployee
WHERE ModifiedDate > @MAX_ModifiedDate --MAX(target.ModifiedDate) captured into a variable

--Get the intersected data
SELECT * INTO #CommonData
FROM
(
SELECT
EmployeeID
,EmployeeName
,DateOfJoining
,Designation
,ManagerName
FROM #T_SourceEmp
INTERSECT
SELECT
trg.EmployeeID
,trg.EmployeeName
,trg.DateOfJoining
,trg.Designation
,trg.ManagerName
FROM dbo.T_DestinationEmployee trg
INNER JOIN #T_SourceEmp src
ON trg.EmployeeID = src.EmployeeID
) AS CommonData --the derived table needs an alias

So, #CommonData contains the records which have no difference between the source and destination tables (although their ModifiedDate changed) and which are thus to be ignored.

The records which were updated on the source but are not in the #CommonData table are then identified, as these are the actual changes.

  2. Find the updated records and perform the update operation:

SELECT src.* INTO #T_Updates
FROM dbo.T_SourceEmployee src
LEFT JOIN #CommonData CD ON src.EmployeeID = CD.EmployeeID
WHERE
src.ModifiedDate > @MAX_ModifiedDate --MAX(target.ModifiedDate) captured into a variable
AND CD.EmployeeID IS NULL  --i.e. not existing as common/unchanged data

UPDATE trg
SET trg.EmployeeName = upd.EmployeeName
,trg.DateOfJoining = upd.DateOfJoining
,trg.Designation = upd.Designation
,trg.ManagerName = upd.ManagerName
FROM dbo.T_DestinationEmployee trg
INNER JOIN #T_Updates upd
ON trg.EmployeeID = upd.EmployeeID


Replicate/Migrate data from one database/table to another – Part 2

To serve the purpose of data synchronisation between two data tables, as discussed in Part 1, SQL Server 2008 introduced a new feature: the MERGE statement.

In this article we will go through the usage of the MERGE statement and then briefly talk about its limitations and issues.

I think it’s better to refer to MSDN rather than replicate the same content here again.

So here we go: http://msdn.microsoft.com/en-us/library/bb510625.aspx

A few main excerpts:

  1. Simple to use, as UPDATE, INSERT & DELETE can be performed in one pass.
  2. At least one of the three MATCHED clauses must be specified, but they can be specified in any order. A variable cannot be updated more than once in the same MATCHED clause.
  3. The MERGE statement requires a semicolon (;) as a statement terminator. Error 10713 is raised when a MERGE statement is run without the terminator.
  4. When used after MERGE, @@ROWCOUNT (Transact-SQL) returns the total number of rows inserted, updated, and deleted to the client.
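For illustration, a minimal sketch against the employee tables used elsewhere in this series might look like this:

MERGE dbo.T_DestinationEmployee AS trg
USING dbo.T_SourceEmployee AS src
ON trg.EmployeeID = src.EmployeeID
WHEN MATCHED THEN
UPDATE SET EmployeeName = src.EmployeeName
,DateOfJoining = src.DateOfJoining
,Designation = src.Designation
,ManagerName = src.ManagerName
WHEN NOT MATCHED BY TARGET THEN
INSERT (EmployeeID, EmployeeName, DateOfJoining, Designation, ManagerName)
VALUES (src.EmployeeID, src.EmployeeName, src.DateOfJoining, src.Designation, src.ManagerName)
WHEN NOT MATCHED BY SOURCE THEN
DELETE; --the statement terminator is mandatory for MERGE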

Limitations:

  1. For every insert, update, or delete action specified in the MERGE statement, SQL Server fires any corresponding AFTER triggers defined on the target table, but does not guarantee on which action to fire triggers first or last. This may result into some errors or data inconsistency if not handled properly.
  2. Although the source can be a query/temp table etc., which can be shaped so that the MERGE operation runs on either the whole table or only a part of it, the target has to be taken as the complete table; therefore it’s almost impossible to restrict the actual INSERTs/DELETEs, and only the UPDATE part can be handled properly in this case.
  3. Not always performant, and there is very little scope to tweak the statement to make it perform better.

Issues:

  1. Merge conflicts / INSERT-UPDATE race condition: Let’s take an example where one record is intended to be inserted from the source into the target table because, based upon the primary key / joining key, it doesn’t exist at the target. Under the default isolation level (Read Committed), the MERGE statement will find it a potential insert and will try to insert it into the target table.

Now consider the scenario where another process tries to run the same merge procedure at the same time. This will also find the same record to be inserted.

But then when both of these processes (running in parallel) would try to insert the same record into the target table – Primary Key Violation error would be reported.

Actually, this shouldn’t be considered a bug. The atomicity of the MERGE transaction isn’t violated: it’s still all-or-nothing behavior. So, to prevent this conflict, a higher isolation level should be considered, or a table hint such as WITH (HOLDLOCK) or WITH (SERIALIZABLE) should be used on the target table.
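For example, the insert branch of the earlier sketch could be protected like this:

--Serialize the key-range check and the insert so two concurrent MERGEs can't both "not find" the row
MERGE dbo.T_DestinationEmployee WITH (HOLDLOCK) AS trg
USING dbo.T_SourceEmployee AS src
ON trg.EmployeeID = src.EmployeeID
WHEN NOT MATCHED BY TARGET THEN
INSERT (EmployeeID, EmployeeName, DateOfJoining, Designation, ManagerName)
VALUES (src.EmployeeID, src.EmployeeName, src.DateOfJoining, src.Designation, src.ManagerName);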

  2. Attempting to set a non-NULL-able column’s value to NULL: This error is sometimes seen when using functions like ISNULL() or a window function in the ON clause of the MERGE statement. Even though we know the source doesn’t contain any NULL value for the non-nullable column it is updating or inserting into, the error is still reported at times.

This is a SQL Server bug in the MERGE statement, and there are many theories about the reason for it. One of them is that if the query optimizer puts a table spool into the plan, chances are high that this error will result.


Replicate/Migrate data from one database/table to another – part 1

This is about a very simple though very useful and frequently used requirement to replicate/merge data from one database/table to another.

There can be various scenarios wherein we do get data from multiple sources into one database/table and then we might be performing some data cleansing, transformation & enrichment processes on these data and then eventually forming a final record-set. This may be happening into some staging area (or into a database not exposed to external world). This is a good practice for multiple reasons but a very basic advantage is to segregate the heavy operations from the data table being accessed by the external users/applications.

So, once all is prepared here locally, we need to replicate/migrate it to the external database/table. This comprises inserting new records, updating the existing records and deleting (hard or soft, based upon the requirement) records in the target table.

Requirement: Let me take a very simple requirement to illustrate the solution: we have a source/staging table to which some new employees have been added, a few attributes of some existing employees have been updated, and a few employees have left. We need to merge this into the target table.

(Data cleansing, transformation & enrichment etc processes are out of scope here and for the sake of simplicity, we are assuming that both tables are in the same database otherwise if in the different database, we can use the fully qualified name of the table like database.schema.tablename)

SourceTable: dbo.T_SourceEmployee

Target Table: dbo.T_DestinationEmployee

Primary Key:     EmployeeID

We will talk about the solution/approach first then will try with actual SQL queries.

Approach: The basic idea is to: 1. find the records in the source table which don’t exist in the target table, 2. find the records which are in the target table but no longer exist in the source table, and 3. figure out whether anything has changed for the records which exist in both the source and the target table.

Now, what should be the order of executing these above steps? The best approach is to

  1. Update
  2. Insert
  3. Delete

The reason for this order of execution is that if we insert first, we would then unnecessarily operate on a larger record-set for the further operations. We could, though, perform the delete operation at the beginning, i.e. delete, update & then insert.

Technical solution: Now, there can be several coding techniques to achieve this outcome. We will discuss one of them and in the next we will talk about how to deal with the shortcomings of this first approach.

This is the approach of making use of JOINS.

  1. UPDATE

UPDATE trg
SET Designation = src.Designation,
ManagerName = src.ManagerName
FROM dbo.T_DestinationEmployee trg
INNER JOIN dbo.T_SourceEmployee src
ON trg.EmployeeID = src.EmployeeID
WHERE src.Designation <> trg.Designation --values likely to be changing
OR src.ManagerName <> trg.ManagerName

2. INSERT

–New records
INSERT INTO dbo.T_DestinationEmployee
(
EmployeeID
,EmployeeName
,DateOfJoining
,Designation
,ManagerName
)
SELECT
src.EmployeeID
,src.EmployeeName
,src.DateOfJoining
,src.Designation
,src.ManagerName
FROM dbo.T_SourceEmployee src
LEFT JOIN dbo.T_DestinationEmployee trg
ON trg.EmployeeID = src.EmployeeID
WHERE trg.EmployeeID IS NULL


3. DELETE

–Delete no more valid records
DELETE trg
FROM dbo.T_DestinationEmployee trg
LEFT JOIN dbo.T_SourceEmployee src
ON trg.EmployeeID = src.EmployeeID
WHERE src.EmployeeID IS NULL

There are a few shortcomings to this approach/solution, which we will talk about in the forthcoming articles.

--Scripts to try with
CREATE TABLE dbo.T_SourceEmployee
(
EmployeeID INT PRIMARY KEY
,EmployeeName VARCHAR(255)
,DateOfJoining DATE
,Designation VARCHAR(255)
,ManagerName VARCHAR(255)
)
GO
CREATE TABLE dbo.T_DestinationEmployee
(
EmployeeID INT PRIMARY KEY
,EmployeeName VARCHAR(255)
,DateOfJoining DATE
,Designation VARCHAR(255)
,ManagerName VARCHAR(255)
)
GO
INSERT INTO dbo.T_SourceEmployee
VALUES('101', 'John', '01Jan2014', 'Associate', 'Paul')
,('102', 'Richard', '01Feb2014', 'Manager', 'Simon')
,('103', 'Ridham', '10Feb2014', 'Software Engineer', 'Shankar')
GO
UPDATE dbo.T_SourceEmployee
SET Designation = 'Sr. Associate'
WHERE EmployeeID = 101
GO


Welcome back !

Hi All,

After being active on my blog for 2 years and then being away from it for good 3.5 years for certain reasons, I am back here !

Over these last 3.5 years, many of my colleagues & visitors suggested I make a comeback, and I think now the time has come to put up some stuff which may make someone’s life easier… and before I completely forget all this :)

Hope to get the full support & response from you all !

Cheers !!


Dynamic Named Sets in SQL Server 2008 Analysis Services (SSAS 2008)

Introduction:

SSAS has a nice & useful feature for defining calculations (Calculated Members & Named Sets). These calculations can then be used in MDX or by other client tools etc. They come in very handy, allowing users to analyse their data with the required parameters, attributes & custom calculations. Using Named Sets, we can predefine member(s) based upon certain conditions, and the user can then directly use them to see the measure/data against those members.

Let’s say we want to offer the user a set of the last 5 days/dates which contain transactional data, even though there may be a lot of later dates with no fact data.

So for this we can create a regular Named Set on the Calculations tab, something like:

CREATE SET ActualSalePeriods AS
    Tail
    (
        NonEmpty([Date Order].[Calendar].[Month].[Date].Members,
                 Measures.[Sales Amount]),
        5
    );

Every calculated member / named set created here is defined behind the scenes with the keyword ‘CREATE’ as part of the cube’s MDX script.

These Named Sets are evaluated every time the cube is processed, so at the next process, if the cube gets data with new dates, the result set of this Named Set will be refreshed accordingly. So we can say they are static in nature and their contents remain intact until the cube is reprocessed.

Dynamic Named Set:

These regular Named Sets are static in nature (the only option up to the 2005 version), which improves query performance by not re-evaluating them against the cube on every query. In fact, this works well in situations like getting the latest 5 dates mentioned above: as and when new dates with data are added to the cube, it is reprocessed & the above set is then also updated with the latest members.

However, this may cause a serious issue with respect to a requirement which (let’s say) is to get the top 10 employees with the highest sales on a monthly basis. The created (static) Named Set, evaluated at processing time, will give a correct result set with those employees. But if the user then wants to slice/dice the result set by some other dimension, say region or product, the user will again be offered the same result set generated earlier at processing time, i.e. wrong information.

To cope with such issues/requirements, SSAS 2008 has come up with one more option for creating a Named Set, named ‘Dynamic’. So the developer can either select Static (the default) to get the regular Named Set, or select Dynamic to give the Named Set dynamic behaviour. By selecting the Dynamic option, SSAS adds the keyword ‘DYNAMIC’ before the SET keyword to make it a dynamic named set.
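For illustration, a minimal sketch of such a definition (the dimension, hierarchy and measure names below are placeholders, not from any particular cube):

CREATE DYNAMIC SET CURRENTCUBE.[Top10SalesEmployees] AS
    TopCount
    (
        [Employee].[Employee].[Employee].Members,
        10,
        Measures.[Sales Amount]
    );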

Dynamic sets are not calculated once. They are calculated before each & every query and, very importantly, in the context of that query’s WHERE clause and sub-selects.

Thus the above requirement of getting Top 10 sales employees will work absolutely fine with Dynamic Named Set.

Imp: The dynamic named set not only resolves the issue mentioned above but also adds a capability to MDX which regular named sets could not deliver. This relates to the issue of using multi-select dimensions in a sub-select rather than in the WHERE clause (which is the case with Excel, which always converts WHERE to a sub-select). Using dynamic sets, multi-select statements are evaluated properly even when used in sub-select statements.
