SQL Server: Changing a column type

I have a SQL Server 2008 database. This database has a column that represents a "bit". This bit flag now needs to be an int. My problem is, there is already a lot of data in the table. Is there a way to easily do this conversion? Since I'm moving to a larger data type, I don't see a problem there. The problem is the existing data. How do I map my bit fields to the corresponding int values while changing the column type?


asked 08 Nov at 19:11

3 Answers

Like this:

ALTER TABLE TableName ALTER COLUMN ColumnName int

Change TableName to the name of your table and ColumnName to the name of your column.

See also: When changing column data types, use the ALTER TABLE TableName ALTER COLUMN syntax; don't drop and recreate the column.
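As a concrete sketch (the table and column names here are hypothetical, not from the question), SQL Server widens the stored bit values in place during the alter, so 0 stays 0 and 1 stays 1 with no mapping step needed:

```sql
-- Hypothetical table/column names for illustration.
-- The implicit bit -> int conversion preserves the existing 0/1 values.
ALTER TABLE dbo.Orders ALTER COLUMN IsArchived int NOT NULL;
```

One caveat: if you omit NOT NULL and the original bit column was NOT NULL, the altered column becomes nullable, so restate the nullability you want.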

answered 08 Nov at 11:23

SQLMenace has the right answer.

If the table is simply enormous, though, the change could take a while and could block for a long time. If so, and your database can't tolerate a long down time for this table (perhaps it's 24-hour OLTP?), my suggestion would be to do something like this:

-- Add a new temporary column to store the changed value.
ALTER TABLE dbo.TableName ADD NewColumnName int NULL;
CREATE NONCLUSTERED INDEX IX_TableName_NewColumnName ON dbo.TableName (NewColumnName)
   INCLUDE (ColumnName); -- the INCLUDE clause only works on SQL 2008 and up
-- This index may help or hurt performance, I'm not sure... :)

-- Update the table in batches of 10000 at a time
WHILE 1 = 1
BEGIN
   UPDATE X -- Updating a derived table only works on SQL 2005 and up
   SET X.NewColumnName = X.ColumnName
   FROM (
      SELECT TOP 10000 * FROM dbo.TableName WHERE NewColumnName IS NULL
   ) X;
   IF @@ROWCOUNT = 0 BREAK;
END;

ALTER TABLE dbo.TableName ALTER COLUMN NewColumnName int NOT NULL;

BEGIN TRAN; -- now do as *little* work as possible in this blocking transaction
UPDATE T -- catch any updates that happened after we touched the row
SET T.NewColumnName = T.ColumnName
FROM dbo.TableName T WITH (HOLDLOCK, TABLOCKX)
WHERE T.NewColumnName <> T.ColumnName;
-- The lock hints ensure everyone is blocked until we do the switcheroo

EXEC sp_rename 'dbo.TableName.ColumnName', 'OldColumnName', 'COLUMN';
EXEC sp_rename 'dbo.TableName.NewColumnName', 'ColumnName', 'COLUMN';

DROP INDEX IX_TableName_NewColumnName ON dbo.TableName;
ALTER TABLE dbo.TableName DROP COLUMN OldColumnName;
COMMIT TRAN;

My script is untested... might be a good idea to test it first. :)

Doing the update in batches keeps each transaction small, and prevents huge tempdb usage, transaction log growth, and long locks (the table can be used by other clients in between each update loop). 10k is often a good batch size, but sometimes a smaller number is needed depending on how long each batch takes. Ideally you'd pick a size that uses a good portion of memory but doesn't squeeze out anyone else, and uses little or no tempdb. I've cut multi-hour updates against huge tables to mere minutes (and more importantly, non-blocking minutes) with this kind of looping strategy.

Note: for those experimenting with the performance of this strategy, the nonclustered index I suggested will help most during the transaction portion of the script. For a simply enormous table, a different, filtered nonclustered index could help:

CREATE NONCLUSTERED INDEX IX_TableName_NewColumnName_Null ON dbo.TableName (NewColumnName)
WHERE NewColumnName IS NULL; -- filtered indexes are SQL 2008 and up only

On the other hand, adding the original nonclustered index could take longer if it's done after updating the new column data, or potentially block. Or it may not take much time at all. Experimentation is in order.

answered 22 Aug '14 at 05:08

I have a lengthy article on this topic at SQL Server table columns under the hood. Do as SQLMenace suggests, using ALTER TABLE ... ALTER COLUMN. Because this alter increases the column size, the change cannot be metadata-only; it will need to do a size-of-data operation (update every row). But this will be transparent to you (read the linked article for more details). However, be prepared to handle a potentially large transaction as the update of every row progresses, so make sure you have plenty of log room and don't run out of log space mid-alter.
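As a rough sketch (reusing the placeholder TableName/ColumnName names from this thread), you could check transaction log usage before kicking off the size-of-data alter:

```sql
-- Report log file size and percent used for every database,
-- so you can judge whether there is headroom for the alter.
DBCC SQLPERF(LOGSPACE);

-- The alter itself runs as a single transaction that touches every row.
ALTER TABLE dbo.TableName ALTER COLUMN ColumnName int;
```

If the log looks tight, grow it (or back it up, under the full recovery model) before running the alter rather than hoping autogrow keeps up.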

answered 09 Nov at 11:00
