Sometimes you may find your SQL Server database is performing at suboptimal levels. After heavy use, especially in a production environment, a database endures a high number of transactions, object creations, and drops, all of which can lead to fragmentation of the data at the disk level. You may also find that your database is quite large yet contains empty space that you cannot reclaim when you attempt to shrink the database. In this case, a common culprit is fragmentation.
Unfortunately, the only real way to fix fragmentation is to defragment the database by completely realigning the data at the disk level. This can be incredibly time-consuming, as your database may have many tables and indexes. Additionally, manual work always carries the potential for error, such as missing a particular index or table.
Fortunately, with the help of a few SQL Server system views, this process can be relatively easy. In fact, I have created a script to automate this task, which should help ease the pain a bit. Granted, it still may take some time, as it still has to iterate through each table in the database one by one, but overall, it is much easier (and safer) than manual effort, as you only have to set up the script once and then apply it to whichever tables you want to defragment.
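To give you a feel for the system views involved, here is a hypothetical query (the filegroup name is a placeholder) that lists every table and index stored in a given filegroup. Note that heaps show up as `type_desc = 'HEAP'`, and partitioned objects are naturally excluded because their `data_space_id` points to a partition scheme rather than a filegroup:

```sql
-- Illustrative only: list tables and indexes residing in one filegroup.
-- 'FG_Data' is a placeholder filegroup name.
SELECT s.name  AS SchemaName,
       t.name  AS TableName,
       i.name  AS IndexName,
       i.type_desc,            -- CLUSTERED, NONCLUSTERED, or HEAP
       fg.name AS FileGroupName
FROM sys.indexes AS i
JOIN sys.tables  AS t  ON t.object_id = i.object_id
JOIN sys.schemas AS s  ON s.schema_id = t.schema_id
JOIN sys.filegroups AS fg ON fg.data_space_id = i.data_space_id
WHERE fg.name = N'FG_Data';
```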
The main caveat with defragmenting via this method is that it requires extra disk space, as you will have to create at least one new file and filegroup. However, this can actually be a boon for your database, as it offers you a chance to completely reorganize how your data is stored. Because you need the extra space for this process anyway, you can think about where you would like to store the new database file(s). Maybe you have the ability to create several new files and put them all on separate drives. Whatever the case, whether you simply want to defragment one file or spread new files across different drives, this process can significantly improve the performance of your database.
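Creating the new filegroup and its file(s) is a one-time setup step. A minimal sketch, with placeholder database, filegroup, file names, paths, and sizes:

```sql
-- Hypothetical setup: add a new filegroup and place its data file on a
-- separate drive. All names, paths, and sizes are placeholders.
ALTER DATABASE MyDatabase ADD FILEGROUP FG_New;

ALTER DATABASE MyDatabase
ADD FILE
(
    NAME = N'MyDatabase_FG_New_1',
    FILENAME = N'E:\SQLData\MyDatabase_FG_New_1.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
)
TO FILEGROUP FG_New;
```

If you have several drives available, you can repeat the `ADD FILE` statement with a file on each drive; SQL Server will proportionally fill the files within the filegroup.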
Before I show you the script, I want to point out a few things:
- Note that the script still may take a lot of time, so you may want to set aside some downtime for your database and run this script when the database is not being used. Alternatively, you can run the script for one table or schema at a time. A third option is to use the ONLINE = ON option for the indexes: make sure to do some research on exactly what it does, but you can add it to the WITH clause of the commands that affect the indexes.
- You may want to change the default filegroup of your database. This ensures that all new, incoming data in the future will go to the right place.
- Fragmentation is inevitable, especially in environments with a lot of different DML statements, as those will affect the way the data is stored at the disk level, so you will probably have to do this again eventually. However, if you took the chance to reorganize your files, filegroups, and drives, as I mentioned above, it may be a while before this becomes any sort of real problem and you absolutely have to do this again.
- After executing the below script, it would behoove you to shrink the currently-fragmented files, as this will allow you to regain the space that is currently allocated to this database. Basically, this ensures you won’t be using at least double the space you actually need.
- This script focuses solely on what objects currently exist; it does not analyze performance, or whether a particular index is necessary or even being used. As such, you should do some digging on your own to decide which objects you want or have to keep, and which are wastes of space.
- This script does not affect or move partitioned tables. Those should be maintained in a different manner.
- Make sure to specify/switch to the correct database before you execute the script.
- As always, be sure to test this script before running it in production. The script worked for our environment, but it is possible that either you have a different configuration for which I did not account or I missed something. You should still be able to use this script as a starting point, then alter it to make it suit your needs, but I strongly encourage you to test it first.
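A few of the housekeeping items above can be sketched as one-liners (all object names below are placeholders, not part of the script itself):

```sql
-- 1. Online index operations (an Enterprise edition feature) keep the
--    table available to readers and writers during the rebuild:
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON);

-- 2. Make the new filegroup the default so future objects land there:
ALTER DATABASE MyDatabase MODIFY FILEGROUP FG_New DEFAULT;

-- 3. After the move, reclaim space from the old, now mostly empty file:
DBCC SHRINKFILE (N'MyDatabase_Data_Old', 1024);  -- target size in MB
```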
The below script takes four parameters. @CurrentFileGroup is the filegroup in which the fragmented data currently exists. @NewFileGroup is the filegroup to which you want to move the data from @CurrentFileGroup. These two parameters are required, and both filegroups must exist in the database before you execute the script. @SchemaName is the name of the schema that you want to move, and @TableName is the name of the table that you want to move. You may specify @SchemaName without @TableName if you want to move an entire schema, but you cannot specify @TableName without @SchemaName, as a table must belong to a schema. Both are optional; if you don’t specify them, the script will move all tables in @CurrentFileGroup to @NewFileGroup.
The easiest way to move a table to a new filegroup is to create a clustered index on the table on the new filegroup. This is exactly what the script does. First, it analyzes the tables specified in the @CurrentFileGroup. If they have any indexes, the script moves them exactly as they are to the @NewFileGroup. If a table does not have a clustered index, the script will create one on the @NewFileGroup and then drop it once complete. (If the table has a nonclustered index, but not a clustered index, the script will move the nonclustered index as-is to the @NewFileGroup, create a new clustered index on the @NewFileGroup, then drop the newly-created clustered index.) Maintaining index column order for the existing indexes, and dropping the newly-created ones, ensures that the table structure remains the same after the table migration. This script handles all indexes, as well as primary key and unique constraints. Regarding errors, the script will output a message for both successes and failures. For failures, it will output the index and table that failed migration, as well as the SQL error, which should help you troubleshoot the issue.
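The two core commands behind this technique, sketched with placeholder table and index names: for an existing clustered index, `CREATE INDEX ... WITH (DROP_EXISTING = ON)` rebuilds it directly on the new filegroup (the key columns and uniqueness must match the existing definition), and for a heap, a temporary clustered index moves the data and is then dropped:

```sql
-- Existing clustered index: rebuild it on the new filegroup, keeping
-- the same key columns, order, and uniqueness.
CREATE UNIQUE CLUSTERED INDEX PK_Orders
ON dbo.Orders (OrderID)
WITH (DROP_EXISTING = ON)
ON FG_New;

-- Heap (no clustered index): create a temporary clustered index on the
-- new filegroup to relocate the data, then drop it to return to a heap.
CREATE CLUSTERED INDEX IX_Tmp_Move ON dbo.Staging (SomeColumn) ON FG_New;
DROP INDEX IX_Tmp_Move ON dbo.Staging;
```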
After all that, I think you’re ready to see how this works. Here is the script:
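(The full script is not reproduced here. As a rough orientation only, the skeleton below sketches the core loop under the assumptions described above: it walks the clustered indexes in @CurrentFileGroup, rebuilds each on @NewFileGroup, and reports success or failure per index. It omits heaps, nonclustered indexes, and included columns, and the STRING_AGG function requires SQL Server 2017 or later.)

```sql
-- Simplified, illustrative skeleton -- not the full production script.
DECLARE @CurrentFileGroup sysname = N'FG_Data',   -- placeholder names
        @NewFileGroup     sysname = N'FG_New';

DECLARE @SchemaName sysname, @TableName sysname, @IndexName sysname,
        @IsUnique bit, @KeyCols nvarchar(max), @Sql nvarchar(max);

DECLARE IndexCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT s.name, t.name, i.name, i.is_unique,
           -- Rebuild the key column list in its original order.
           (SELECT STRING_AGG(QUOTENAME(c.name)
                       + CASE WHEN ic.is_descending_key = 1
                              THEN N' DESC' ELSE N'' END, N', ')
                   WITHIN GROUP (ORDER BY ic.key_ordinal)
            FROM sys.index_columns AS ic
            JOIN sys.columns AS c ON c.object_id = ic.object_id
                                 AND c.column_id = ic.column_id
            WHERE ic.object_id = i.object_id
              AND ic.index_id  = i.index_id
              AND ic.key_ordinal > 0)
    FROM sys.indexes AS i
    JOIN sys.tables  AS t ON t.object_id = i.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN sys.filegroups AS fg ON fg.data_space_id = i.data_space_id
    WHERE fg.name = @CurrentFileGroup
      AND i.type = 1;  -- clustered indexes only, for brevity

OPEN IndexCursor;
FETCH NEXT FROM IndexCursor
    INTO @SchemaName, @TableName, @IndexName, @IsUnique, @KeyCols;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- DROP_EXISTING keeps the definition while moving the storage.
    SET @Sql = N'CREATE '
             + CASE WHEN @IsUnique = 1 THEN N'UNIQUE ' ELSE N'' END
             + N'CLUSTERED INDEX ' + QUOTENAME(@IndexName)
             + N' ON ' + QUOTENAME(@SchemaName) + N'.'
             + QUOTENAME(@TableName)
             + N' (' + @KeyCols + N')'
             + N' WITH (DROP_EXISTING = ON) ON '
             + QUOTENAME(@NewFileGroup) + N';';
    BEGIN TRY
        EXEC sys.sp_executesql @Sql;
        PRINT N'Moved ' + @SchemaName + N'.' + @TableName
            + N' (' + @IndexName + N')';
    END TRY
    BEGIN CATCH
        PRINT N'FAILED ' + @SchemaName + N'.' + @TableName
            + N' (' + @IndexName + N'): ' + ERROR_MESSAGE();
    END CATCH;

    FETCH NEXT FROM IndexCursor
        INTO @SchemaName, @TableName, @IndexName, @IsUnique, @KeyCols;
END;

CLOSE IndexCursor;
DEALLOCATE IndexCursor;
```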
And there you have it! After running this script, the non-partitioned tables in the specified filegroup will be defragmented. Run some performance tests to see how the change affected your database. Once you shrink the old files, your database should take up less disk space, and it should run more efficiently. If this is not the case, you may want to take a further look at your hardware and database file setup, but those are beyond the scope of this post. Thank you for reading, and good luck in your other DBA endeavors.