Tuesday, September 14, 2010

Rebuilding Master - Caching Issue

I recently encountered an issue where I had to change the server-level collation setting, so I decided to rebuild the master database. I tried to run this command-line utility for SQL Server 2005:

start /wait \setup.exe /qn INSTANCENAME=SQLExpress REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD= SQLCOLLATION=SQL_Latin1_General_CP437_CS_AS

I received an error stating "Installation package for the product SQL Server 2005 (64bit) cannot be found. Please locate a valid SQLRUN_SQL.MSI".

I tried to find a resolution for this, and after searching for around two hours I found a short and painless solution: I added ‘REINSTALLMODE=vomus’, which instructs setup to ignore the cached location of the .MSI file and use the local (specified) one.

start /wait \setup.exe /qn INSTANCENAME=SQLExpress REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD= SQLCOLLATION=SQL_Latin1_General_CP437_CS_AS REINSTALLMODE=vomus

How to fix orphaned SQL Server users

A database user for which the corresponding SQL Server login is undefined or is incorrectly defined on a server instance cannot log in to the instance. Such a user is said to be an orphaned user of the database on that server instance. There are three common ways a user can become orphaned:

• Explicitly dropping the login that the user belongs to (a quick sketch of this follows the list)
• Restoring the database on another instance of SQL Server
• Restoring an old master backup
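
As a quick illustration of the first case, a user can be orphaned simply by dropping its login. The names below (TestLogin, TestUser, YourDatabase) are placeholders for illustration only:

-- create a SQL login and a matching database user
CREATE LOGIN TestLogin WITH PASSWORD = 'Str0ngP@ssword!'
GO
USE YourDatabase   -- any user database
GO
CREATE USER TestUser FOR LOGIN TestLogin
GO
-- dropping the login leaves TestUser orphaned in the database
DROP LOGIN TestLogin
GO
-- TestUser now shows up in the orphaned users report
EXEC sp_change_users_login 'Report'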


Use an account with database admin privileges to resolve this issue. Find the list of orphaned users for the database using the stored procedure below:

EXEC sp_change_users_login 'Report'

If you already have a login id and password for this user, fix it by doing:

EXEC sp_change_users_login 'Auto_Fix', 'user'

If you want to create a new login id and password for this user, fix it by doing:

EXEC sp_change_users_login 'Auto_Fix', 'user', 'login', 'password'

As per BOL, avoid using Auto_Fix in security-sensitive situations. Auto_Fix makes best estimates on links, possibly allowing a user more access permissions than intended. You can use Update_One instead, which links the specified user in the current database to a login. The login must already exist, and both the user and the login must be specified.

EXEC sp_change_users_login 'Update_One', 'UserName', 'LoginName'
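
If the login itself does not exist yet, a minimal sketch (with the same placeholder names) is to create it first and then map the orphaned user with Update_One:

-- LoginName, UserName and YourDatabase are placeholders
CREATE LOGIN LoginName WITH PASSWORD = 'Str0ngP@ssword!'
GO
USE YourDatabase
GO
EXEC sp_change_users_login 'Update_One', 'UserName', 'LoginName'
GO
-- verify that no orphaned users remain
EXEC sp_change_users_login 'Report'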

Tuesday, September 7, 2010

SQL Server - Basics

The smallest unit of data that SQL Server works with is 8 KB. This 8 KB chunk of data is called a page. SQL Server therefore stores data on disk in pages, and in memory it manipulates those same 8 KB pages.

An entire record or row must fit within an 8 KB page when data is written to disk. This means a single page can hold more than one row, but a row cannot span multiple pages. For example, if an employee table has employee name, address, city, state, zip and phone, then all of that combined data must fit in less than one page (8 KB). A few data types, such as large-object (LOB) text and binary data, store a pointer in the row while the actual data can be spread across multiple pages or even in a separate file. All of this data is kept in the data files on disk, which have either an .MDF or .NDF file extension.
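
To see this in practice, the rough sketch below uses the sys.dm_db_partition_stats DMV (available from SQL Server 2005 onwards) to list how many 8 KB pages each table in the current database uses:

SELECT OBJECT_NAME(p.object_id) AS table_name,
       SUM(p.in_row_data_page_count) AS in_row_pages,   -- regular 8 KB data pages
       SUM(p.lob_used_page_count) AS lob_pages,         -- LOB/text pages stored off-row
       SUM(p.row_count) AS total_rows
FROM sys.dm_db_partition_stats AS p
WHERE p.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY p.object_id
ORDER BY in_row_pages DESC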

When a Structured Query Language (SQL) statement is submitted from the query window of SQL Server, SQL Server's internal query optimizer looks at the query and constructs an execution plan for executing it (i.e. the steps it will need to take in order to get that data off of the disk). SQL Server has a number of techniques it can use, some of which are better in certain conditions than others. Once SQL Server has the plan, it executes it and retrieves the needed data from disk. If the data was requested by a client, it is then sent across the network.
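
If you want to look at the plan the optimizer builds without actually running the query, one simple way is SET SHOWPLAN_TEXT. The table below is just an AdventureWorks example, not anything specific to this post:

SET SHOWPLAN_TEXT ON
GO
SELECT FirstName, LastName
FROM Person.Contact
WHERE LastName = 'Smith'
GO
SET SHOWPLAN_TEXT OFF
GO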

In the case of a modification query (DML), SQL Server first modifies the pages of that data in memory. After the modification it does not write those updated pages directly to disk; instead it records the change in a log file called the transaction log. A transaction log file has the .LDF extension and keeps track of every transaction in the database. If SQL Server wrote to the data files after every DML statement fired on a table, it would greatly increase the load of writing records to disk.
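
You can actually see these modified ("dirty") pages sitting in memory. The sketch below counts them per database using sys.dm_os_buffer_descriptors (available from SQL Server 2005 onwards; it requires VIEW SERVER STATE permission):

SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) AS cached_pages,
       SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY dirty_pages DESC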

SQL Server has an automated recovery process that kicks in when it starts back up after a crash. Once the server is up, it decides whether or not the modified pages need to be written to disk: it checks the transaction log, and if a transaction is complete, the corresponding pages are written to disk. This is how SQL Server knows that a change is safe on disk.

When SQL Server crashes, this automated recovery process kicks in when it starts back up. It first goes to the transaction log and checks for transactions that have not yet been flagged as written to disk. It knows that the flagged transactions are already safe on disk; the remaining ones had not been copied to disk and existed only in memory when the crash happened. In this case SQL Server reads those transactions from the log file, executes them again and immediately writes those pages to disk. This process allows SQL Server to catch up with all in-progress work and ensures that no committed data is lost (provided the database files are not corrupted). In short, whatever happens in SQL Server takes place through the transaction log. How much log is kept depends on the recovery model of the database; there are three recovery models - Full, Simple and Bulk-logged.
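
For illustration only, the undocumented fn_dblog function can be used to peek at the records in the active portion of the transaction log of the current database:

SELECT TOP (20)
       [Current LSN],
       Operation,
       [Transaction ID],
       [Transaction Name]
FROM fn_dblog(NULL, NULL)   -- NULL, NULL = read the entire active log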

Individual databases can be changed from one recovery model to another depending on the requirement or functionality. For example, the Simple recovery model makes only minimal use of the transaction log: committed transactions are truncated from the log automatically to keep the log file size small. Simple recovery is appropriate for read-only databases that have no changes being made; if there is no change, there will not be any data loss during a SQL Server crash.
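
Checking and changing the recovery model is straightforward (database_name is a placeholder):

SELECT name, recovery_model_desc
FROM sys.databases

ALTER DATABASE database_name SET RECOVERY SIMPLE   -- or FULL / BULK_LOGGED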

This is how data moves between disk and memory. This entire process is absolutely essential to how most of SQL Server's functionality actually works, as well as to how you administer it.

Monday, September 6, 2010

Troubleshoot Suspect Database

Sometimes we encounter a problem where the database is in suspect mode and cannot be recovered if we do not have the latest backup of the database. This may happen when the database is not closed properly before SQL Server shuts down. Sometimes the DBA tries to detach the suspect database and re-attach its data and log files, which gives the following error:

The log scan number (number:number:number) passed to log scan in database 'database_name' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup.

Sometimes we also try to repair the suspect database by running the DBCC CHECKDB command with the REPAIR_REBUILD option, but it fails with the following error:

Database 'databasename' cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server error log for more information.

Following is the solution to resolve this issue:

If the DBA has detached the database, try the following steps:
1. Detach the suspect database (if not already detached).
2. Rename the existing “.ldf” file to “_old.ldf”.
3. Use the following attach command, which will create a new log file:

CREATE DATABASE database_name
ON (FILENAME = '<path to the existing .mdf file>')
FOR ATTACH_REBUILD_LOG

If the DBA has not detached the database, then:

1. Set the database to emergency mode by using the following command:

ALTER DATABASE database_name SET EMERGENCY

2. Set the database to single-user mode and run DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option:

USE master
GO
ALTER DATABASE database_name SET SINGLE_USER
GO
DBCC CHECKDB (database_name, REPAIR_ALLOW_DATA_LOSS)

3. Bring the database back to multi-user mode:

ALTER DATABASE database_name SET MULTI_USER

4. Bring the database back online:

ALTER DATABASE database_name SET ONLINE

The database will be back online.
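
As a final check (database_name is again a placeholder), you can confirm the state from sys.databases:

SELECT name, state_desc
FROM sys.databases
WHERE name = 'database_name'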