The following issues should be considered when creating a database in the context of filegroups:

1. Set a reasonable size for the database.

This can be done by estimating the required size of the database in advance, by calculating the size of each individual table that will be in the database. The following values help to calculate the database size more accurately:

a. Calculate the clustered index or heap size from the following:

i. Number of columns (Num_Cols)

ii. Number of rows (Num_Rows)

iii. Number and total size of fixed-length columns (Fixed_Data_Size)

iv. Number of variable-length columns (Num_Variable_Cols)

v. Max_Var_Size

vi. Variable_Data_Size

vii. Row_Size

viii. Rows_Per_Page

ix. Num_Pages, etc.

b. Calculate the non-clustered index size from the following:

i. Num_Rows

ii. Num_Key_Cols

iii. Fixed_Key_Size

iv. Num_Variable_Key_Cols

v. Max_Var_Key_Size, etc.
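The clustered index (or heap) estimate above can be sketched in T-SQL using the page-size formulas from the SQL Server documentation; the input values here (row count, column counts, and byte sizes) are assumptions for illustration:

```sql
-- Assumed inputs for a hypothetical table
DECLARE @Num_Rows          bigint = 1000000;  -- expected row count
DECLARE @Num_Cols          int    = 10;       -- total number of columns
DECLARE @Fixed_Data_Size   int    = 60;       -- total bytes of fixed-length columns
DECLARE @Num_Variable_Cols int    = 2;        -- number of variable-length columns
DECLARE @Max_Var_Size      int    = 100;      -- max total bytes of variable columns

-- Formulas from the SQL Server documentation for estimating clustered index size
DECLARE @Null_Bitmap        int = 2 + ((@Num_Cols + 7) / 8);
DECLARE @Variable_Data_Size int = 2 + (@Num_Variable_Cols * 2) + @Max_Var_Size;
DECLARE @Row_Size           int = @Fixed_Data_Size + @Variable_Data_Size
                                  + @Null_Bitmap + 4;          -- +4 byte row header
DECLARE @Rows_Per_Page      int = 8096 / (@Row_Size + 2);      -- 8096 usable bytes/page
DECLARE @Num_Pages       bigint = CEILING(1.0 * @Num_Rows / @Rows_Per_Page);

SELECT @Num_Pages     AS EstimatedPages,
       @Num_Pages * 8 AS EstimatedKB;          -- each page is 8 KB
```

Summing this estimate over all tables, plus the non-clustered index estimates, gives a reasonable initial size for the database.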

2. Set a reasonable size for the transaction log.

As a rule of thumb, the smaller the database, the larger the transaction log should be relative to the database size.

3. Leave the AUTOGROW feature ON for the data files and log file when necessary.

This feature automatically increases the allocated file size when needed. It is very helpful when there is no DBA, or only an inexperienced one.

4. Set a reasonable size for the AUTOGROW increment.

Specifying a sufficiently large increment reduces how often AUTOGROW occurs. Since each AUTOGROW event degrades performance, limiting its frequency improves performance.
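Tips 1, 3, and 4 can be combined in the CREATE DATABASE statement; the database name, file paths, and sizes below are assumptions for illustration:

```sql
CREATE DATABASE SalesDB
ON PRIMARY
(
    NAME = SalesDB_Data,
    FILENAME = 'E:\Data\SalesDB_Data.mdf',
    SIZE = 500MB,          -- reasonable initial size (tip 1)
    FILEGROWTH = 100MB     -- fixed, reasonable AUTOGROW increment (tip 4)
)
LOG ON
(
    NAME = SalesDB_Log,
    FILENAME = 'F:\Log\SalesDB_Log.ldf',
    SIZE = 200MB,
    FILEGROWTH = 50MB
);
```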

5. Don’t enable the AUTOSHRINK feature.

Since AUTOSHRINK degrades performance, a manual SHRINK during off-peak hours will give better performance.
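A minimal sketch of this tip (the database name is an assumption):

```sql
-- Make sure AUTOSHRINK is off
ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;

-- Run a manual shrink during off-peak hours, leaving 10 percent free space
DBCC SHRINKDATABASE (SalesDB, 10);
```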

6. Create a user-defined filegroup and make it the default filegroup.

It is a good decision to manage system objects and user-defined objects separately.
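A sketch of this setup, assuming the database, file, and filegroup names used here:

```sql
-- Add a user-defined filegroup with one data file
ALTER DATABASE SalesDB ADD FILEGROUP UserData;

ALTER DATABASE SalesDB
ADD FILE
(
    NAME = SalesDB_User1,
    FILENAME = 'E:\Data\SalesDB_User1.ndf',
    SIZE = 500MB,
    FILEGROWTH = 100MB
)
TO FILEGROUP UserData;

-- New user objects now go to UserData instead of PRIMARY
ALTER DATABASE SalesDB MODIFY FILEGROUP UserData DEFAULT;
```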

7. Create a user-defined filegroup and create some tables in it to run maintenance tasks.

It is helpful to create separate filegroups for maintenance tasks such as backups, DBCC checks, and UPDATE STATISTICS, and to place objects that share a similar maintenance plan in the same filegroup.
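With tables grouped this way, maintenance can target a single filegroup; the database, filegroup, and path names below are assumptions:

```sql
-- Back up one filegroup instead of the whole database
BACKUP DATABASE SalesDB
FILEGROUP = 'UserData'
TO DISK = 'G:\Backup\SalesDB_UserData.bak';

-- Check the integrity of just that filegroup
USE SalesDB;
DBCC CHECKFILEGROUP ('UserData');
```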

8. Try to have one file per physical disk array.

When a table is accessed sequentially, a separate thread is created for each file on each disk array, which leads to parallel processing and a resulting performance increase.
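A sketch of spreading a filegroup across several disk arrays, one file per array; the drives E:, F:, and G: are assumed to be separate physical arrays, and the database and filegroup names are assumptions:

```sql
ALTER DATABASE SalesDB
ADD FILE
    (NAME = SalesDB_Data2, FILENAME = 'F:\Data\SalesDB_Data2.ndf', SIZE = 500MB),
    (NAME = SalesDB_Data3, FILENAME = 'G:\Data\SalesDB_Data3.ndf', SIZE = 500MB)
TO FILEGROUP UserData;
```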

9. Don’t create many data and log files on the same physical disk array.

With AUTOGROW ON, many files on the same array may lead to fragmentation of those files. So try to keep the number of such files on each physical disk array small (recall No. 8).

10. For heavily accessed tables, place the tables in one filegroup and their indexes in a different filegroup on a different physical disk array.

This is easier to maintain, and there will also be a separate thread per disk array if the arrays are different.
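A sketch of splitting a table and its non-clustered index across filegroups; the table, column, and filegroup names are assumptions:

```sql
CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY PRIMARY KEY,
    CustomerID int,
    OrderDate  datetime
)
ON TableFG;              -- table (and clustered index) on one filegroup

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
ON IndexFG;              -- index on a different filegroup / disk array
```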

11. For heavily accessed tables with text/image columns, place the table in one filegroup and the text/image columns in a different filegroup on different physical disks.

This can be done in the CREATE TABLE statement with the TEXTIMAGE_ON clause.
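For example (the table and filegroup names are assumptions):

```sql
CREATE TABLE dbo.Documents
(
    DocID int IDENTITY PRIMARY KEY,
    Title varchar(200),
    Body  text               -- text/image data goes to the LOB filegroup
)
ON TableFG
TEXTIMAGE_ON LobFG;          -- LOB filegroup on different physical disks
```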

12. Place the log file on a different physical disk array than the data files.

Since log files are write-intensive, it is important that the disk containing the log file has sufficient disk I/O performance.

13. If some join queries are used much more often than others, place the tables used in those queries in different filegroups on different physical disk arrays.

Recall the No. 8.

14. Place read-only tables in a different filegroup on a different physical disk array.

This improves performance and also makes permissions easy to control. It can be set up using ALTER DATABASE and marking the filegroup READONLY.
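For example (the database and filegroup names are assumptions):

```sql
-- Mark the filegroup holding the read-only tables as READONLY
ALTER DATABASE SalesDB MODIFY FILEGROUP ReadOnlyFG READONLY;
```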

Use the Windows NT Performance Monitor to determine the appropriate number of data and log files on the server by checking the Disk Queue Length counter.