I have a SQL Server 2008 database with a column that is currently a bit. This bit flag now needs to be an int. My problem is that there is already a lot of data in the table. Is there an easy way to do this conversion? Since I'm moving to a larger data type, I don't see a problem with the type itself; my concern is the existing data. How do I map my bit values to the corresponding int values while changing the column type?
asked Nov 8 '11 at 19:11
    ALTER TABLE TableName ALTER COLUMN ColumnName int
    GO

Change TableName to the name of your table and ColumnName to the name of your column.
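To see how the conversion behaves, you can try it on a throwaway table first (the table and column names below are made up for the demo, not from the question):

```sql
CREATE TABLE dbo.DemoFlags
(
    Id   int IDENTITY(1,1) PRIMARY KEY,
    Flag bit NOT NULL
);
INSERT INTO dbo.DemoFlags (Flag) VALUES (0), (1), (1);

-- Widening bit to int is an implicit, lossless conversion
ALTER TABLE dbo.DemoFlags ALTER COLUMN Flag int NOT NULL;

SELECT Flag FROM dbo.DemoFlags; -- existing rows come back as ints 0 and 1
```

The existing bit values simply become the int values 0 and 1; no separate mapping step is required.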
SQLMenace has the right answer.
If the table is simply enormous, though, the change could take a while and could block other users for a long time. If so, and your database can't tolerate long downtime for this table (perhaps it's a 24-hour OLTP system?), my suggestion would be to do something like this:
    -- Add a new temporary column to store the changed value.
    ALTER TABLE dbo.TableName ADD NewColumnName int NULL;

    CREATE NONCLUSTERED INDEX IX_TableName_NewColumnName
       ON dbo.TableName (NewColumnName) INCLUDE (ColumnName);
       -- the INCLUDE only works on SQL 2008 and up
       -- This index may help or hurt performance, I'm not sure... :)

    -- Update the table in batches of 10000 at a time
    WHILE 1 = 1 BEGIN
       UPDATE X -- Updating a derived table only works on SQL 2005 and up
       SET X.NewColumnName = ColumnName
       FROM (
          SELECT TOP 10000 *
          FROM dbo.TableName
          WHERE NewColumnName IS NULL
       ) X;
       IF @@RowCount = 0 BREAK;
    END;

    ALTER TABLE dbo.TableName ALTER COLUMN NewColumnName int NOT NULL;

    BEGIN TRAN; -- now do as *little* work as possible in this blocking transaction
    UPDATE T -- catch any updates that happened after we touched the row
    SET T.NewColumnName = T.ColumnName
    FROM dbo.TableName T WITH (TABLOCKX, HOLDLOCK)
    WHERE T.NewColumnName <> T.ColumnName;
    -- The lock hints ensure everyone is blocked until we do the switcheroo
    EXEC sp_rename 'TableName.ColumnName', 'OldColumnName';
    EXEC sp_rename 'TableName.NewColumnName', 'ColumnName';
    COMMIT TRAN;

    DROP INDEX dbo.TableName.IX_TableName_NewColumnName;
    ALTER TABLE dbo.TableName DROP COLUMN OldColumnName;
My script is untested... might be a good idea to test it first. :)
Doing the update in batches keeps each transaction small, and prevents huge tempdb usage, transaction log growth, and long-held locks (the table can be used by other clients between each update loop). 10,000 is often a good batch size, but sometimes smaller numbers are needed depending on how long each batch takes. Ideally you'd pick a size that uses a good portion of memory but doesn't squeeze out anyone else and uses little or no tempdb. I've cut multi-hour updates against huge tables to mere minutes (and, more importantly, non-blocking minutes) with this kind of looping strategy.
Note: for those experimenting with the performance of this strategy, the nonclustered index I suggested will help most during the transaction portion of the script. For a simply enormous table, a different nonclustered index could help:
    CREATE NONCLUSTERED INDEX IX_TableName_NewColumnName_Null
       ON dbo.TableName (NewColumnName)
       WHERE NewColumnName IS NULL; -- filtered indexes are SQL 2008 and up only
On the other hand, adding the original nonclustered index could take longer if it's done after updating the new column data, or potentially block. Or it may not take much time at all. Experimentation is in order.
I have a lengthy article on this topic at SQL Server table columns under the hood. Do as SQLMenace suggests, using ALTER TABLE ... ALTER COLUMN. Because this alter increases the column size, the change cannot be metadata-only; it will be a size-of-data operation (it updates every row). This will be transparent to you (read the linked article for more details). However, the update of every row happens in a single, potentially large transaction, so make sure you have plenty of transaction log room so you don't run out of log space while it runs.
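If you want to keep an eye on log consumption around that single large transaction, DBCC SQLPERF(LOGSPACE) reports per-database log usage; a sketch (TableName/ColumnName are placeholders, as above):

```sql
-- Snapshot log usage before the size-of-data change
DBCC SQLPERF(LOGSPACE);

-- The size-of-data operation: every row is rewritten in one transaction
ALTER TABLE dbo.TableName ALTER COLUMN ColumnName int;

-- Compare "Log Space Used (%)" for your database against the first snapshot
DBCC SQLPERF(LOGSPACE);
```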