Some time ago, during an interview for a Database Administrator position, I was asked what the most common problems were that I would initially find. I thought this was a pretty good question, and my mind went racing through the many different situations I have encountered. I definitely didn’t want to offend anyone, and from the previous interviews and questions I already had an idea of what kind of problems were in their environment. I have seen a lot of common problems, and I almost stumbled on the question.

 

I probably should have said that the inability to recover a database was the most prevalent problem I have seen, especially in environments with many instances and thousands of databases. There are a lot of common problems, and almost any answer could have been true and satisfied the interviewer’s question. The problem is that this is me we are dealing with. After the question my thoughts went directly to database files, and that is what I responded with. Files seem to be ignored in so many cases, and they do have an impact. I could write so much about our beloved database files that I would put you, the reader, to sleep. In the future I may expand on the potential topics.

 

Every DBA has their own idea of what is most important, what they tend to focus on, and how they go about it. My ego isn’t big enough, and I am not naïve enough, to think there is only one way to do something. What I am sharing is only my opinion. Your opinion is welcome and solicited! My primary focus is generally performance. Everything that is done to a database, from the administrative side to the application, affects the whole. Even database growth is a factor. A change in one area affects the others. So many times I have heard that “nothing has changed” when a problem arises. One constant about most databases is that they are always changing, unless it is some type of static database that may be in read-only mode. It is the planning of a change, and the monitoring of that change, that I believe to be of importance.

 

Some other ‘common’ issues I have seen, and thought about responding with, include but are not limited to:

  • Backups and recoverability
  • Security
  • Change control processes
  • Job and batch scheduling
  • Maintenance implementation
  • Monitoring
  • Data models
  • Documentation
  • Index and partitioning
  • Overcomplication of architecture
  • Application and query design

 

Before I drift off and go into detail about all of these potential issues, I should get back to discussing the database files. Each one of the issues mentioned above is a topic unto itself. My reasoning for picking database files in my response is that I see them as the foundation of the database. This is what the database structures are built upon. This is where it all starts after the hardware is established (which could easily be argued as the foundation).

 

Problems that I have seen with the files include, but are not limited to:

  • File growth settings set too large or too small
  • High number of virtual log files
  • Inconsistent or improper file placement
  • Shrinking databases and files
  • Overthinking filegroups
  • Naming conventions
  • No monitoring of the space used within a database file

 

SQL Server file architecture goes from database -> filegroup -> logical file name -> physical database file. Each database has at least two physical files: one for the data and another for the transaction log. You will commonly see *.mdf (primary data file) and *.ldf (transaction log file) file names; additional data files commonly use *.ndf. These are the ‘two’ files. A database can have multiple filegroups, and a filegroup can have multiple files. From a development perspective only the filegroup needs to be thought of. Think of it as the logical layer upon which partitions are built. A partition could be a table or an index, even if you do not consider them partitioned. To be accurate, I will refer to tables and indexes as partitions.
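
A quick way to see this hierarchy for the current database is to join the file and filegroup catalog views. A minimal sketch:

    -- Map each filegroup to its logical and physical files in the current database.
    -- Log files belong to no filegroup, so the LEFT JOIN keeps them in the result.
    SELECT fg.name AS filegroup_name,
           df.name AS logical_file_name,
           df.physical_name,
           df.size * 8 / 1024 AS size_mb  -- size is counted in 8 KB pages
    FROM sys.database_files AS df
    LEFT JOIN sys.filegroups AS fg
      ON df.data_space_id = fg.data_space_id
    ORDER BY fg.name, df.name;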

 

It is prudent to build any application, database, or infrastructure to be maintainable and scalable. The same goes for files. Pointing and clicking a database into existence just may leave you, or someone else, stuck in a corner someday, working hard to get out of it. Databases grow and situations change. Best to be prepared.

 

Database files can grow. Really, trust me, they can. If the files are set to grow automatically (which most seem to be), then every time they grow by their specified increment the system is going to slow down, because the growth introduces I/O overhead. I’ve seen plenty of log errors because of the delays. I’ve never been a supporter of growing database files by percentages. If you are not paying attention to these percentages, you may be seeing some large growth increments that hurt other processes, and you may fill up a disk because of this as well. Another issue I have with percentages is when the increment is too small and the growths keep recurring, introducing small virtual log files. Best practice would be to have a good understanding of the file needs and grow the files manually, possibly leaving autogrow on for safety’s sake. I’ve rarely had time for that, so I cheated and always kept enough free space in the filegroup to rebuild the largest partition. That usually gave me enough room to wiggle, and the file(s) would grow during the maintenance window. When setting growth increments, be consistent across all of the files within a filegroup and keep those files the same size.
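
Switching a file from percentage growth to a fixed increment is a one-line change. A sketch, using a hypothetical database SalesDB with a logical file name DATA_01:

    -- Hypothetical names; substitute your own database and logical file name.
    -- Change a percentage growth setting to a fixed 512 MB increment.
    ALTER DATABASE SalesDB
    MODIFY FILE (NAME = DATA_01, FILEGROWTH = 512MB);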

 

Virtual log files are also a consideration with growth settings and the initial creation of the transaction log. I’ve seen a 2 terabyte database take over an hour to come back online after a restart because of the virtual log files, and this was on hardware that was smoking fast. Yikes! It was the virtual log files causing the delay while rolling the transactions. Have you ever wondered why a rollback is taking so long? You might want to look at the transaction log. Virtual log files also affect normal database transactional activity. These are important, and it wouldn’t hurt to be aware of them. And it is so easy to check and correct this.
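
Checking is quick. On SQL Server 2016 SP2 and later you can count the rows in sys.dm_db_log_info; on older versions, DBCC LOGINFO returns one row per virtual log file. A sketch:

    -- One row per virtual log file in the current database (SQL Server 2016 SP2+).
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID());

    -- On older versions, each row returned here is one virtual log file:
    -- DBCC LOGINFO;

Correcting a high count generally means shrinking the log and regrowing it in sensible increments during a quiet period.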

 

Did I mention I am obsessive-compulsive about files? I’ve gotten better over the years and learned to pick my battles. File placement is one of these areas. For one reason or another, database files end up all over the place, including on the same drive as the operating system. There is no need to take down the operating system because a database file set to grow by percentage needed 100 megabytes, attempted to grow by 10 gigabytes, and failed. When monitoring disk contention it is good to have the database files on drives dedicated to database files; this helps with isolating problems. This is one of those areas that may not seem that important, but I believe that it is, for many reasons: moving and restoring databases, reducing bottlenecks, general housekeeping of folders and files, managing growth, and plenty of other things that I am not thinking of right now. Everything ties together in some way or another, and understanding cause-and-effect relationships can help you sleep at night. Literally.
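
To see where every file on an instance actually lives, sys.master_files covers all databases in one query:

    -- Physical location of every database file on the instance.
    SELECT DB_NAME(database_id) AS database_name,
           name AS logical_file_name,
           physical_name,
           type_desc  -- ROWS or LOG
    FROM sys.master_files
    ORDER BY database_name, type_desc, name;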

 

The database shrink is the enemy of everyone. Okay, almost everyone. There is good reason for it at times. Say, for example, that you have a 100 gigabyte file that is out of space, with no more drive space for that file, but another data drive has plenty of space. Another file could be added to the filegroup and the partitions spread evenly across the files. After the even distribution of the data, the 100 gigabyte file could be shrunk. The only problem is that shrinking the file introduces fragmentation. That is why I say shrinking files is the enemy of everyone, even when it needs to be done. A repeated reason for doing this is to save drive space, usually on some daily or weekly schedule. This doesn’t make sense. If the file is just going to grow again, then fix the problem. Add more drive space, set growth settings, add another file, have smaller transactions, issue a checkpoint if needed, increase the frequency of transaction log backups, or do whatever it takes to remedy the root cause. Just try to avoid shrinking the files if you can.
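
For the scenario above, the rough shape of the fix might look like the following sketch. The names, paths, and sizes are hypothetical, and since the shrink fragments indexes, plan on rebuilding the affected partitions afterwards:

    -- Hypothetical names and sizes: add a file on the new drive to the full filegroup.
    ALTER DATABASE SalesDB
    ADD FILE (NAME = DATA_02,
              FILENAME = 'E:\SQLData\SalesDB_DATA_02.ndf',
              SIZE = 50GB, FILEGROWTH = 1GB)
    TO FILEGROUP [DATA];

    -- Rebuild the large partitions so the data spreads across both files,
    -- then release the empty space from the original file (target size in MB).
    DBCC SHRINKFILE (DATA_01, 51200);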

 

I like to be able to look at a file on a drive and know exactly what it is. In many cases there are old database files consuming space that no one is aware of. I also do not want to have to think, or go look up the name of a file, in the middle of a crisis. Consistent naming conventions are also handy for automation. I like things simple, and planning and consistency help. I borrowed a naming convention from one of my Oracle mentors and have stuck with it ever since. I had to do a little conversion from the schema/tablespace architecture to the database/filegroup architecture in SQL Server, but it works nicely for me. The naming convention is <DatabaseName>_<Filegroup>_<FileNumber>.mdf/ndf/ldf. In real life this looks like model_PRIMARY_01.mdf for the first file in model’s PRIMARY filegroup. The logical file name would be PRIMARY_01 and the filegroup name would be PRIMARY in this situation.
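
A minimal sketch of the convention at database creation time; the paths are hypothetical, and the log file naming follows the same pattern by analogy:

    -- Physical names follow <DatabaseName>_<Filegroup>_<FileNumber>.
    CREATE DATABASE SalesDB
    ON PRIMARY
        (NAME = PRIMARY_01,
         FILENAME = 'D:\SQLData\SalesDB_PRIMARY_01.mdf')
    LOG ON
        (NAME = LOG_01,
         FILENAME = 'L:\SQLLogs\SalesDB_LOG_01.ldf');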

 

Speaking of the model database, the database that is the ‘model’ for new databases: I like to set the file sizes and growth settings for both the primary data file and the transaction log file. This helps with the creation of new databases on the instance. It has no effect on databases that are restored or attached to the instance.
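
The change itself is just a MODIFY FILE against model’s two files; the sizes and increments below are only illustrative:

    -- model's logical file names are modeldev (data) and modellog (log).
    ALTER DATABASE model
    MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 256MB);
    ALTER DATABASE model
    MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);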

 

It is one thing to monitor the space used by a database file, and another to monitor the space utilized within the file. Just because a file is one size doesn’t mean it holds that much data. Think about monitoring the filegroup: how large the filegroup is, and how much of its space is used. I prefer the files within a filegroup to be the same size and to have an equal amount of data in each file. There are possible performance implications with having different sizes of files within a filegroup, but my main reason for doing this is managing growth and drive space.
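
FILEPROPERTY gives the inside-the-file view for the current database. A sketch comparing allocated size to space actually used, per file:

    -- Sizes are counted in 8 KB pages, hence the * 8 / 1024 conversion to MB.
    SELECT name AS logical_file_name,
           size * 8 / 1024 AS allocated_mb,
           FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
    FROM sys.database_files;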
