Large query immediately eats up all available memory on Windows Server 2003
SP1 running SQL Server 2005 64-bit (8 GB RAM). CPU pegs at 99% and nothing
else appears to be able to access SQL Server. The query is attempting to
delete possibly as many as 200 million duplicate rows from a table. Are
there settings in SQL Server to limit how much processing power any one
query can take from the server?
Regards,
Jamie
Hi,
Why are you trying to delete 200 million rows in one transaction?
Can you divide it into small transactions and then perform the deletion?
SET ROWCOUNT 1000   -- limit each DELETE to 1000 rows per pass

WHILE 1 = 1
BEGIN
    DELETE ...      -- your DELETE statement here
    IF @@ROWCOUNT = 0
    BEGIN
        BREAK       -- nothing left to delete
    END
    ELSE
    BEGIN
        CHECKPOINT  -- or even BACKUP LOG, to keep the log from growing
    END
END

SET ROWCOUNT 0      -- restore the default
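Note that Books Online says SET ROWCOUNT will stop affecting DELETE, INSERT
and UPDATE in a future release, so the same loop can also be written with
DELETE TOP in SQL Server 2005. A minimal sketch (dbo.MyTable and the dup_flag
criterion are placeholders; adapt them to your own duplicate-detection logic):

WHILE 1 = 1
BEGIN
    -- placeholder table and criteria for illustration only
    DELETE TOP (1000) FROM dbo.MyTable
    WHERE dup_flag = 1;

    IF @@ROWCOUNT = 0
        BREAK;      -- nothing left to delete

    CHECKPOINT;     -- or BACKUP LOG if the database is in full recovery
END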
"thejamie" <thejamie@.discussions.microsoft.com> wrote in message
news:B566306D-F6DC-4CB5-B85A-E0978B158495@.microsoft.com...
> Large Query immediately eats up all available memory in Server 2003 Server
> XP1 SQL 2005 64 bit (8 gig ram). CPU pegs at 99% and nothing else appears
> to
> be able to access SQL Server. Query is attempting to delete possibly as
> many
> as 200 million duplicate entries in a table. Do settings exist in SQL to
> limit the amount of processing power any one query can tap into from the
> server?
> --
> Regards,
> Jamie
|||First, I would run this query when nobody else is connected, to avoid
locks etc.
Second, if possible, split the query into steps instead of doing all the
deletes at once.
For example, do 1,000 deletes at a time, and so on.
You can set the minimum memory per query at the server level; I have no
experience with changing this option.
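For reference, the option is called min memory per query; a minimal sketch of
viewing and changing it with sp_configure (the 2048 value is only an
illustration, not a recommendation):

EXEC sp_configure 'show advanced options', 1;   -- make advanced settings visible
RECONFIGURE;

EXEC sp_configure 'min memory per query (KB)';  -- current value (default 1024)

-- example only: raise the per-query memory grant floor to 2048 KB
EXEC sp_configure 'min memory per query (KB)', 2048;
RECONFIGURE;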
Good luck.
DBA4ever
"thejamie" wrote:
> Large Query immediately eats up all available memory in Server 2003 Server
> XP1 SQL 2005 64 bit (8 gig ram). CPU pegs at 99% and nothing else appears to
> be able to access SQL Server. Query is attempting to delete possibly as many
> as 200 million duplicate entries in a table. Do settings exist in SQL to
> limit the amount of processing power any one query can tap into from the
> server?
> --
> Regards,
> Jamie
|||Can I point this in another direction? There is something called AWE that
is used to allocate memory, and it appears that min and max memory can be
set in the memory section of the server properties. How will this affect
such a query?
I should have added that this is the only query running on the server, and
it is simply a test of what happens under severe conditions. The test
suggests to me that SQL Server's memory use should be capped. After the
query ran for 8 hours, I canceled it. It locked up the server last night and
we had to do a cold boot. In my opinion, despite the recklessness of the
delete statement, a cold boot should rarely if ever be required as the
solution to a SQL Server problem arising from a misdirected query.
Regards,
Jamie
"Hate_orphaned_users" wrote:
[vbcol=seagreen]
> First i would run this query when nobody is connected to avoid locks etc..
> Second if possible split the query in steps instead of doing all deletes in
> 1 time.
> Example...do 1000 deletes atime and so on.
> You can set the minimal query memory at the server level, i have no
> experience with changing this option .
> Good luck.
> DBA4ever
> "thejamie" wrote:
|||On Thu, 7 Dec 2006 05:20:01 -0800, thejamie
<thejamie@discussions.microsoft.com> wrote:
>There is something called AWE that
>is used to allocate memory
AWE is for 32-bit, it does not apply to 64-bit SQL Server.
Roy Harvey
Beacon Falls, CT
|||On Thu, 7 Dec 2006 05:20:01 -0800, thejamie
<thejamie@discussions.microsoft.com> wrote:
> After the query ran 8
>hours, I canceled it. It locked up the server last night and we had to do a
>cold boot. In my opinion, despite the recklessness of the delete statement,
>a cold boot should rarely if ever be required as the solution to a SQL
>problem arising from a mis-directed query.
The normal run time for a query that deletes 200 million rows may very
well exceed 8 hours. When such a query is cancelled while running, it
can take longer to roll back the work already done than it took to do
it in the first place, so 12 hours rolling back 8 hours of processing
would not be unexpected.
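If you do end up cancelling it, you can at least watch the rollback; a rough
sketch (session id 53 is only an example, find the real one first):

-- find the session doing the delete (or the rollback after a cancel)
SELECT session_id, status, command, percent_complete
FROM sys.dm_exec_requests;

-- check rollback progress for a killed session (53 is an example spid)
KILL 53 WITH STATUSONLY;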
Roy Harvey
Beacon Falls, CT
|||You should select the "Dynamically configure SQL Server memory" option.
Then set the Maximum option to the full amount.
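The T-SQL equivalent is the max server memory setting; a minimal sketch (the
6144 MB cap is just an illustration for an 8 GB box, not a recommendation):

EXEC sp_configure 'show advanced options', 1;  -- make memory settings visible
RECONFIGURE;

-- example only: cap the buffer pool at 6144 MB, leaving room for the OS
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;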
|||> Thanks for that information, Dan. That's what I get for going by the
> documentation! 8-)
I understand, Roy. I was under the same impression until I ran across
Slava's blog.
Hope this helps.
Dan Guzman
SQL Server MVP
"Roy Harvey" <roy_harvey@.snet.net> wrote in message
news:g97gn2hckaq1h4um92i784tc7kpku4vitk@.4ax.com...
> On Thu, 7 Dec 2006 07:45:20 -0600, "Dan Guzman"
> <guzmanda@.nospam-online.sbcglobal.net> wrote:
>
> Thanks for that information, Dan. That's what I get for going by the
> documentation! 8-)
> "3 Note that the sp_configure awe enabled option is present on 64-bit
> SQL Server, but it is ignored. It is subject to removal in future
> releases or service packs of 64-bit SQL Server."
> Roy
|||How about lightweight pooling with Windows fibers? Should "Boost SQL
Server priority" be checked or unchecked? Should max worker threads be set
to 0?
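For reference, here is how I am looking at those settings from T-SQL; a
minimal read-only sketch (it only displays the current values, nothing is
changed):

EXEC sp_configure 'show advanced options', 1;  -- make these settings visible
RECONFIGURE;

EXEC sp_configure 'lightweight pooling';  -- fiber mode on/off
EXEC sp_configure 'priority boost';       -- process priority
EXEC sp_configure 'max worker threads';   -- 0 = let SQL Server decide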
Regards,
Jamie
"Roy Harvey" wrote:
> On Thu, 7 Dec 2006 05:20:01 -0800, thejamie
> <thejamie@.discussions.microsoft.com> wrote:
>
> AWE is for 32-bit, it does not apply to 64-bit SQL Server.
> Roy Harvey
> Beacon Falls, CT
>
|||The service was restarted and this had no effect on the memory and CPU
usage... the cold boot came next, and now the CPU usage and memory are maxed
out again.
Regards,
Jamie
"Dan Guzman" wrote:
> I understand, Roy. I was under the same impression until I ran across
> Slova's blog.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Roy Harvey" <roy_harvey@.snet.net> wrote in message
> news:g97gn2hckaq1h4um92i784tc7kpku4vitk@.4ax.com...
>