Hi,
I have a SQL 2000 table with 400 million records. All records need to be
updated (one field).
1. What I tested was a DTS transfer (updating the field during the
transfer) to a different table in the same db. It took 67 hours.
2. The second try I did was a SELECT INTO in the same db. Same thing -- 70
hours.
Is there any faster method that I could use to accomplish this?
Thanks
|||With an index on a suitable field, such as an identity or a spread-out date,
you can do this in a batch mode, something like this:
declare @i int, @maxid int
select @i = min(id), @maxid = max(id) from yourtable      -- id = the indexed key
while @i <= @maxid
begin
    begin tran
    update yourtable set yourfield = ...                  -- the new value
        where id between @i and @i + 50000                -- batch of roughly 50K-100K rows
    if @@error <> 0 begin rollback tran break end         -- check for error
    commit tran
    set @i = @i + 50001
end
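Committing each batch separately keeps every transaction small, so (especially with the simple recovery model) the log space can be reused between batches instead of having to hold all 400 million row changes at once.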
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
"Mecn" <mecn@.yahoo.com> wrote in message
news:%23bek6QtXIHA.1208@.TK2MSFTNGP03.phx.gbl...
> Hi,
> I have a SQL 2000 table with 400 million records. All records need to be
> updated (one field).
> 1. What I tested was a DTS transfer (updating the field during the
> transfer) to a different table in the same db. It took 67 hours.
> 2. The second try I did was a SELECT INTO in the same db. Same thing -- 70
> hours.
> Is there any faster method that I could use to accomplish this?
> Thanks
>
>|||Hi,
What you are testing is not really an update but a copy of a huge table.
Could you give more details about exactly what you are going to update: the field,
its data type, the operation?
Could you restore a copy of this database to some other place and test
running a simple UPDATE statement? Note that since this will be one
transaction, you will need enough disk space for the transaction log to grow.
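For the test, that can be as simple as the following sketch (BigTable and SomeField are stand-ins for the real table and column):
-- a single UPDATE is one transaction, so the log must hold every one of the ~400 million row changes
UPDATE BigTable
SET SomeField = ...   -- the new value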
Hope this helps,
Ben Nevarez
"Mecn" wrote:
> Hi,
> I have a SQL 2000 table with 400 million records. All records need to be
> updated (one field).
> 1. What I tested was a DTS transfer (updating the field during the
> transfer) to a different table in the same db. It took 67 hours.
> 2. The second try I did was a SELECT INTO in the same db. Same thing -- 70
> hours.
> Is there any faster method that I could use to accomplish this?
> Thanks
>
>|||Just so I can be clear: when you say UPDATE, I think you mean a transfer of
data (a copy).
Some things you could try are:
Use TABLOCK
If possible, use the simple recovery model
Possibly disable the indexes and rebuild them at the end
These are just some ideas, but it depends on the exact table structures,
code, etc.
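A rough sketch of what those three ideas look like on SQL 2000 -- the names here (MyDb, BigTable, SomeField, IX_BigTable_SomeField) are stand-ins for your actual database, table, column and index:
-- switch to the simple recovery model so the work is minimally logged where possible
ALTER DATABASE MyDb SET RECOVERY SIMPLE
-- drop the nonclustered index on the column before the mass change (SQL 2000 DROP INDEX syntax)
DROP INDEX BigTable.IX_BigTable_SomeField
-- TABLOCK takes one table lock instead of millions of row/page locks
UPDATE BigTable WITH (TABLOCK)
SET SomeField = ...   -- whatever the new value is
-- rebuild the index once the data is in place
CREATE INDEX IX_BigTable_SomeField ON BigTable (SomeField)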
--
Jack Vamvas
___________________________________
Search IT jobs from multiple sources- http://www.ITjobfeed.com
"Mecn" <mecn@.yahoo.com> wrote in message
news:%23bek6QtXIHA.1208@.TK2MSFTNGP03.phx.gbl...
> Hi,
> I have a SQL 2000 table with 400 million records. All records need to be
> updated (one field).
> 1. What I tested was a DTS transfer (updating the field during the
> transfer) to a different table in the same db. It took 67 hours.
> 2. The second try I did was a SELECT INTO in the same db. Same thing -- 70
> hours.
> Is there any faster method that I could use to accomplish this?
> Thanks
>
>|||67 hours for 400 million?
It's really, really slow... (around 1,600 rows/sec).
Are you sure you used the bulk insert option, and did you set the
batch size value (to 10,000 or something like that)?
Your disk subsystem has an impact too, but you should be able to load the
table in 1 to 2 hours (depending on the size of one row).
Also make sure you drop the indexes before the loading process, then
recreate them afterwards. (Recreating the indexes will add some processing time
after the load, so estimate between 15 minutes and an hour depending on the number
of indexes.)
The idea is to cut the big table into small batches.
In our developments we reach 200,000 rows loaded per second.
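If the load were done from T-SQL instead of DTS, the same idea looks roughly like this (the staging table name and file path are made up for the example):
-- each 10,000-row batch is committed as its own transaction
BULK INSERT dbo.BigTable_new
FROM 'D:\export\bigtable.dat'
WITH (BATCHSIZE = 10000, TABLOCK)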
"Mecn" <mecn@.yahoo.com> wrote in message
news:#bek6QtXIHA.1208@.TK2MSFTNGP03.phx.gbl...
> Hi,
> I have a SQL 2000 table with 400 million records. All records need to be
> updated (one field).
> 1. What I tested was a DTS transfer (updating the field during the
> transfer) to a different table in the same db. It took 67 hours.
> 2. The second try I did was a SELECT INTO in the same db. Same thing -- 70
> hours.
> Is there any faster method that I could use to accomplish this?
> Thanks
>
>|||Thanks for all the responses.
1. You are right, I was doing a transfer instead of updating the field. I think
that using DTS with a 400,000 batch size / fast load (same as bulk insert?) to a
non-indexed table, or an INSERT INTO (bulk insert?), will be faster than updating the
field. -- 67 hours... too long.
Any faster ways?
Agree?
"Jeje" <willgart@.hotmail.com> wrote in message
news:FB0540B8-0A9B-4603-AD86-01742FFC7DD7@.microsoft.com...
> 67 hours for 400 million?
> That's really, really slow... (around 1,600 rows/sec).
> Are you sure you used the bulk insert option, and did you set
> the batch size value (to 10,000 or something like that)?
> Your disk subsystem has an impact too, but you should be able to load the
> table in 1 to 2 hours (depending on the size of one row).
> Also make sure you drop the indexes before the loading process, then
> recreate them afterwards. (Recreating the indexes will add some processing
> time after the load, so estimate between 15 minutes and an hour depending on the
> number of indexes.)
> The idea is to cut the big table into small batches.
> In our developments we reach 200,000 rows loaded per second.
> "Mecn" <mecn@.yahoo.com> wrote in message
> news:#bek6QtXIHA.1208@.TK2MSFTNGP03.phx.gbl...
>> Hi,
>> I have a SQL 2000 table with 400 million records. All records need to be
>> updated (one field).
>> 1. What I tested was a DTS transfer (updating the field during the
>> transfer) to a different table in the same db. It took 67 hours.
>> 2. The second try I did was a SELECT INTO in the same db. Same thing --
>> 70 hours.
>> Is there any faster method that I could use to accomplish this?
>> Thanks
>>|||The table datasize = 80GB, total rows = 375million
"Jeje" <willgart@.hotmail.com> wrote in message
news:FB0540B8-0A9B-4603-AD86-01742FFC7DD7@.microsoft.com...
> 67 hours for 400 million?
> That's really, really slow... (around 1,600 rows/sec).
> Are you sure you used the bulk insert option, and did you set
> the batch size value (to 10,000 or something like that)?
> Your disk subsystem has an impact too, but you should be able to load the
> table in 1 to 2 hours (depending on the size of one row).
> Also make sure you drop the indexes before the loading process, then
> recreate them afterwards. (Recreating the indexes will add some processing
> time after the load, so estimate between 15 minutes and an hour depending on the
> number of indexes.)
> The idea is to cut the big table into small batches.
> In our developments we reach 200,000 rows loaded per second.
> "Mecn" <mecn@.yahoo.com> wrote in message
> news:#bek6QtXIHA.1208@.TK2MSFTNGP03.phx.gbl...
>> Hi,
>> I have a SQL 2000 table with 400 million records. All records need to be
>> updated (one field).
>> 1. What I tested was a DTS transfer (updating the field during the
>> transfer) to a different table in the same db. It took 67 hours.
>> 2. The second try I did was a SELECT INTO in the same db. Same thing --
>> 70 hours.
>> Is there any faster method that I could use to accomplish this?
>> Thanks
>>|||yes, fast load is the bulk insert.
But a batch of 400,000 is too large; try reducing it (to 10,000) and see
the difference.
Each batch is a transaction; if the batch is big, the transaction will take
more time to be committed.
You have to check the disk, memory and CPU activity during the load to
identify the bottleneck.
Also ensure that the database recovery model is set to "simple", and make sure the
log files are on a separate controller and disks.
Also validate that the source query is correctly optimized.
And if everything is on the same server, try to spread the source database
or files over different disks than the targeted database/table.
If you read and write on the same disks, this can cause performance issues.
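On SQL 2000, one way to get the log file onto its own disk/controller is detach, move, attach -- a sketch only, with made-up database name and paths:
EXEC sp_detach_db 'MyDb'
-- move MyDb_log.ldf to the dedicated log disk at the OS level, then:
EXEC sp_attach_db 'MyDb', 'D:\data\MyDb.mdf', 'F:\logs\MyDb_log.ldf'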
"Mecn" <mecn@.yahoo.com> wrote in message
news:Okn4KT2XIHA.4272@.TK2MSFTNGP05.phx.gbl...
> Thanks for all the responses.
> 1. You are right, I was doing a transfer instead of updating the field. I
> think that using DTS with a 400,000 batch size / fast load (same as bulk
> insert?) to a non-indexed table, or an INSERT INTO (bulk insert?), will be faster
> than updating the field. -- 67 hours... too long.
> Any faster ways?
> Agree?
>
> "Jeje" <willgart@.hotmail.com> wrote in message
> news:FB0540B8-0A9B-4603-AD86-01742FFC7DD7@.microsoft.com...
>> 67 hours for 400 million?
>> That's really, really slow... (around 1,600 rows/sec).
>> Are you sure you used the bulk insert option, and did you set
>> the batch size value (to 10,000 or something like that)?
>> Your disk subsystem has an impact too, but you should be able to load the
>> table in 1 to 2 hours (depending on the size of one row).
>> Also make sure you drop the indexes before the loading process, then
>> recreate them afterwards. (Recreating the indexes will add some processing
>> time after the load, so estimate between 15 minutes and an hour depending on the
>> number of indexes.)
>> The idea is to cut the big table into small batches.
>> In our developments we reach 200,000 rows loaded per second.
>> "Mecn" <mecn@.yahoo.com> wrote in message
>> news:#bek6QtXIHA.1208@.TK2MSFTNGP03.phx.gbl...
>> Hi,
>> I have a SQL 2000 table with 400 million records. All records need to be
>> updated (one field).
>> 1. What I tested was a DTS transfer (updating the field during the
>> transfer) to a different table in the same db. It took 67 hours.
>> 2. The second try I did was a SELECT INTO in the same db. Same thing --
>> 70 hours.
>> Is there any faster method that I could use to accomplish this?
>> Thanks
>>
>|||Hi
SET ROWCOUNT 10000
update_rows:
UPDATE tablename SET ...
WHERE <condition>   -- the condition must exclude rows that are already updated, or the loop never ends
IF @@ROWCOUNT > 0 GOTO update_rows
SET ROWCOUNT 0
"Mecn" <mecn@.yahoo.com> wrote in message
news:%23bek6QtXIHA.1208@.TK2MSFTNGP03.phx.gbl...
> Hi,
> I have a SQL 2000 table with 400 million records. All records need to be
> updated (one field).
> 1. What I tested was a DTS transfer (updating the field during the
> transfer) to a different table in the same db. It took 67 hours.
> 2. The second try I did was a SELECT INTO in the same db. Same thing -- 70
> hours.
> Is there any faster method that I could use to accomplish this?
> Thanks
>
>