Thursday, March 29, 2012

A few BLOBs per page

Does SQL Server 2005 place several VARBINARY(MAX) values on a single page if
the length of those values is, let's say, 2KB?
Message posted via droptable.com
http://www.droptable.com/Uwe/Forums...erver/200512/1
Hi Alex
Varbinary(max) data will actually be placed in the data row itself if there
is room.
You can set the table property to store all large objects out of the row,
and then varbinary(max) is treated just like image.
Image columns from the same table CAN share space on the same pages for
greater storage space efficiency.
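The table option Kalen mentions can be set with sp_tableoption; a minimal sketch (the option name is real in SQL Server 2005, the table name is hypothetical):

```sql
-- Force varbinary(max)/varchar(max)/nvarchar(max) values to be stored
-- off-row, so they are treated like image/text data.
EXEC sp_tableoption 'dbo.Documents', 'large value types out of row', 1;
```

Setting the option back to 0 restores the default in-row storage for values that fit.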
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
Thanks a lot for your response.


A feature available in oracle, is it available in sql server?

There's a feature in Oracle that allows you to modify tables, columns, values and data from its enterprise console, the same way you can in SQL Server. In Oracle, however, there's a button called 'Show SQL' that allows you to see and copy/paste the resulting SQL for the changes made via the console.

I would imagine that SQL Server has a similar option. The reason I ask is that I would like to learn how to do this through Query Analyzer and get more familiar with the SQL involved, and I could do that if I could see the resulting SQL from Enterprise Manager.

Hope this makes sense.

I did find something in SQL Server called 'Generate SQL', but this doesn't update automatically as you make changes.

Thanks

Yeah, if you have downloaded the BOL (which you should), look for the ALTER TABLE keyword. You can add columns, drop 'em, modify 'em, etc.

hth

Thanks for the reply. But what's the BOL? And what are you talking about?! This doesn't answer my question. I'm talking about the ability to see the resulting SQL when modifying a table in the enterprise console.

What you are looking for is called "Save Change Script".

If you are in Enterprise Manager, right click on a table name, and choose "Design". This will take you into the Design Table interface. If you hover over the 3rd icon from the left you will see that it says "Save Change Script" (note that you actually have to make a change in order for this to become active). If you click on it you will see the exact commands that EM is going to execute to accommodate your changes, and you can opt to save them to disk.
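For a simple change such as adding a nullable column, the saved change script boils down to ordinary DDL along these lines (table and column names are hypothetical; for more invasive changes EM generates a longer rebuild-and-copy script):

```sql
-- What "Save Change Script" emits for adding a nullable column.
ALTER TABLE dbo.Customers ADD
    Notes varchar(255) NULL
```

Running the saved script in Query Analyzer produces the same result as clicking Save in the designer.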

Also, BOL is Books Online, an invaluable free SQL Server reference from Microsoft. It is a huge download but well worth it. You can find it here: SQL Server 2000 Books Online (Updated 2004).

Terri

a function to compare two tables

I need a function which compares two tables.
Can someone help me?

Hi
SELECT OneTable.*, TwoTable.*
FROM OneTable
FULL OUTER JOIN
TwoTable
ON OneTable.c1 = TwoTable.c1
AND OneTable.c2 = TwoTable.c2
...
AND OneTable.cn = TwoTable.cn
WHERE OneTable.key IS NULL
OR TwoTable.key IS NULL;
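On SQL Server 2005 and later, EXCEPT offers a more compact comparison (same hypothetical table names as above; note that, unlike the join conditions, the set operators treat two NULLs as equal):

```sql
-- Rows in OneTable that have no exact match in TwoTable.
SELECT * FROM OneTable
EXCEPT
SELECT * FROM TwoTable;

-- Rows in TwoTable that have no exact match in OneTable.
SELECT * FROM TwoTable
EXCEPT
SELECT * FROM OneTable;
```

Both result sets empty means the tables contain the same rows.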

a fatal error of sqlserver2005

I have been finding many errors these days in my SQL Server 2005. The description is:

SQL Server is terminating because of fatal exception c0000005. This error may be caused by an unhandled Win32 or C++ exception, or by an access violation encountered during exception handling. Check the SQL error log for any related stack dumps or messages. This exception forces SQL Server to shutdown. To recover from this error, restart the server (unless SQLAgent is configured to auto restart).

SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0xfab258f6; actual: 0x7ab258fb). It occurred during a read of page (1:387359) in database ID 20 at offset 0x000000bd23e000 in file 'F:\SQL2005_Data\UKDB.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

When I run DBCC CHECKDB, it reports: "Msg 8967, Level 16, State 216, Line 1
An internal error occurred in DBCC which prevented further processing. Please contact Product Support."

The SQL Server version is:

Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86) Oct 14 2005 00:33:37 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition on Windows NT 5.2 (Build 3790: Service Pack 1)

OS is : Win2003 Server Standard edition with sp1

hardware is :

two hard disks in RAID 1 on a Silicon Image 3114 RAID controller

It sounds like you have a corrupt database.
Some types of corruption result in severe errors.

As a first step, you should consider restoring this database from a good backup.

Be sure to check your system event log to see if any hardware errors are being logged.

The invalid checksum indicates that the storage of the database has been changed after we last wrote it. Flaky hardware is the main reason that checksums were added in SQL 2005.
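The full consistency check the error message asks for can be run as follows (the database name is inferred from the UKDB.mdf file name in the error above):

```sql
-- Full consistency check; ALL_ERRORMSGS lists every error found,
-- NO_INFOMSGS suppresses the informational noise.
DBCC CHECKDB ('UKDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

If CHECKDB itself fails with Msg 8967, as it did here, the corruption is severe enough that restoring from a known-good backup is usually the only safe path.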

If this problem persists, let us know.

Also, let me know if you'd like me to use the contact information in your forum profile to communicate with you offline.

I found my memory has a problem. It caused the SQL error. Thank you for your help.


a faster way to update a big table

hi
well, I am working on a project where our database has two 8000-record tables that have triggers on the update of their fields.
Our application must update these tables, but it takes a long time to do this.
I know there is something named bulk insert, but I couldn't find anything similar to this command for update.
So would you please help me find a faster way to update these tables?
thanks for your attention
Best Regards
EggHeadCafe.com - .NET Developer Portal of Choice
http://www.eggheadcafe.com
hi,
nope.. UPDATE syntax is not overloaded with bulk operators..
If the cause of your delay is the trigger fired by the update
statement, you should perhaps check its code... or, if you are sure the
updates you are performing do not involve the trigger's checks, you can disable
the trigger before executing the statements..
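Disabling the trigger around the batch, as suggested, can be sketched like this (table, trigger, and column names are hypothetical):

```sql
-- Skip trigger execution for the duration of the bulk update.
ALTER TABLE dbo.BigTable DISABLE TRIGGER trg_BigTable_Update;

UPDATE dbo.BigTable
SET SomeColumn = SomeColumn + 1;  -- your bulk update here

-- Re-enable the trigger so normal updates are audited again.
ALTER TABLE dbo.BigTable ENABLE TRIGGER trg_BigTable_Update;
```

Only do this if the trigger's logic really is safe to skip for this batch; nothing re-runs it over the updated rows afterwards.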
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz - http://italy.mvps.org
DbaMgr2k ver 0.20.0 - DbaMgr ver 0.64.0 and further SQL Tools
-- remove DMO to reply

An error in a partitioned table, could you tell me?

1. HIS_HTTP_LOG is a partitioned table.
2. REL_HTTP_LOG is not a partitioned table; it has the same structure as HIS_HTTP_LOG.
3. When HIS_HTTP_LOG has no indexes, the following executes successfully:

ALTER PARTITION SCHEME PS_HIS_HTTP_LOG NEXT USED [FG_03]
ALTER PARTITION FUNCTION PF_HIS_HTTP_LOG() SPLIT RANGE ('20070331 23:59:59.997')
ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3

4. However, when I added the following index on HIS_HTTP_LOG and executed step 3, it raised an error:
a) CREATE INDEX IDX_HIS_HTTP_LOG_001 ON HIS_HTTP_LOG(USERID) ON PS_HIS_HTTP_LOG(STARTIME)
b) ALTER PARTITION SCHEME PS_HIS_HTTP_LOG NEXT USED [FG_03]
ALTER PARTITION FUNCTION PF_HIS_HTTP_LOG() SPLIT RANGE ('20070331 23:59:59.997')
ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3


========== Error message ==========
"ALTER TABLE SWITCH statement failed. There is no identical index in source table 'TMP_HTTP_LOG SWITCH ' for the index 'IDX_HIS_HTTP_LOG_001' in target table 'HIS_HTTP_LOG' ."

When I added the index on REL_HTTP_LOG, it gave me the same error message.

Could you tell me how I can solve the problem?

The error says you need to create an identical index on the source table TMP_HTTP_LOG, matching 'IDX_HIS_HTTP_LOG_001' on the target table 'HIS_HTTP_LOG'. So create such an index and try again.

When you use ALTER TABLE SWITCH to transfer a partition, there is no physical data movement, only a metadata change, so the partitions and tables involved in the switch must be identical: they must have the same columns of the same data type, name, order, and collation, on the same filegroup.
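Based on the error text, the missing piece is a matching index on the switch source table; a sketch using the names from the thread (verify that the key columns, and the filegroup placement, match the target index exactly):

```sql
-- Mirror of IDX_HIS_HTTP_LOG_001 on the switch source table,
-- so ALTER TABLE ... SWITCH finds an identical index on both sides.
CREATE INDEX IDX_HIS_HTTP_LOG_001 ON TMP_HTTP_LOG (USERID);
```

After the index exists on the source, re-run the ALTER PARTITION / SWITCH sequence from step 3.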

A duplicate value cannot be inserted into a unique index

I'll first give the caveat that I've been away from this project for many
weeks (but at least it is my own creation). Having said that, I'm not sure
if this is a replication problem or exactly what I have on my hands here.
When I submit an Insert to a particular table, I can submit as many Inserts
as I would like with no exceptions -- until I replicate my changes back to
the server, then download that table again. It would appear that I have
identical data in both the SQLCE table and the SQL2k table, but for a reason
that I have yet to figure out, the downloaded table will no longer accept
Inserts. I get the message: "A duplicate value cannot be inserted into a
unique index. [,,,,,]".
I do not have indexes on any of the columns. I do have a primary key. Any
advice would be appreciated.
It means you are trying to insert identical values into your PK.
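A primary key is itself enforced by a unique index, which is why the error mentions one even though no explicit indexes were created; you can list a table's indexes, including the one backing the PK, with (hypothetical table name):

```sql
-- Shows all indexes on the table; the PK appears as a unique index.
EXEC sp_helpindex 'dbo.MyTable';
```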
Nope. Identical values into unique indexed columns. I just dropped the
indexes for now.

A doubt about this: "Could not find stored Procedure"

Hi Everybody!

I have a problem in my SQL Server 2000 SP1 database called "Embossamento". When I run the following command in Query Analyzer, I get this error message:

command: exec dbcc_all_dbreindex
message: Could not find stored procedure 'dbcc_all_dbreindex'.

Otherwise, in the same server, but in another Database called "Autorizacao" I execute this dbcc_all_dbreindex with no problems.
Does anybody know why this happens?
I will be waiting for some reply, ok?
Thanks,This must be a custom stored procedure that uses the dbcc dbreindex command - check out the database, Autorizacao, and look for your stored procedures under that database.|||Ok! Thanks!
It worked! I just realized that my database did not have the procedure dbcc_all_dbreindex.
Now it's ok...
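A quick way to check whether the procedure exists in a given database (names from the thread; works on SQL Server 2000):

```sql
-- Returns the object id if the procedure exists in the current
-- database, or NULL if it does not.
USE Embossamento;
SELECT OBJECT_ID('dbo.dbcc_all_dbreindex');
```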

A DOS question! (about aspnet_regsql.exe with paramaters)

I need to set up some ASP security databases and I have seen several sets of instructions. Some say "Navigate to C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727" and some say to use the "asp command box". I do know that the directory and file exist on my machine. I have run aspnet_regsql.exe to configure my local server. I now need to run it to configure my web host's SQL Server 2005, and that requires some parameters like "aspnet_regsql.exe -S [DB Server Name] -U [DB login] -P [Password] -A all -d [Database name]". I am old and started before Windows. I know DOS. But obviously not well enough.

How do you "Navigate to C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727"?

If I say "CD\WINDOWS", it works.

When I am at the "C:\WINDOWS>" prompt and say "CD Microsoft.NET" it says "invalid directory"

When I am at the "C:\>" prompt and say "CD\WINDOWS\Microsoft.NET" it says "invalid directory"

When I am at the "C:\>" prompt and say "CD\WINDOWS\Micros~1" it says "invalid directory"

Thanks for your help. John Brown

Not much traffic on a DOS question! I figured out a workaround.

I copied aspnet_regsql.exe to a directory with a short name (C:\trash) and then ran it from the command prompt. It worked!

John Brown


A domain error occurred. from SQL statement, only when above 10

Attempting to run a query that loops through a cursor: anything over TOP 10 from the table for the cursor results in an
"A domain error occurred."
error message. Below 10, all runs fine. What is this message caused by?

It usually happens when performing mathematical functions
and using values outside of acceptable ranges.
-Sue
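A typical reproduction is a math function fed a value outside its domain; something past the tenth row is presumably producing such a value. For example:

```sql
-- Raises a domain error on SQL Server 2000, since SQRT is
-- undefined for negative numbers.
SELECT SQRT(-1);
```

Checking the cursor's source rows for negative or zero values wherever SQRT, LOG, POWER, and the like are applied usually pinpoints the offending row.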

a Distinct Query

I have 2 tables as follows:

tbl_Articles
3 ArticleID int
0 AuthorID int
0 ArticleTitle
0 ArticleText
0 ArticleDate

tbl_Authors
3 AuthorID
0 AuthorFullName
0 AuthorEmail
0 AuthorDescription
0 AuthorImage

I want to write a query to see the authors and their latest articles, with no duplicate rows.
Like AuthorImage - AuthorFullName - ArticleTitle - ArticleDate

If anyone knows the solution, I will be glad.
Thanks in advance.

select AuthorImage, AuthorFullName, ArticleTitle, ArticleDate = aDate
from tbl_Authors a
inner join (
select AuthorID, aDate = max(ArticleDate)
from tbl_Articles
group by AuthorID) x
on a.AuthorID = x.AuthorID
inner join tbl_Articles b
on x.AuthorID = b.AuthorID
and x.aDate = b.ArticleDate

another version:

select AuthorImage
, AuthorFullName
, ArticleTitle
, ArticleDate
from tbl_Authors AUTH
inner
join tbl_Articles ART
on AUTH.AuthorID
= ART.AuthorID
where ART.ArticleDate
= ( select max(ArticleDate)
from tbl_Articles
where AuthorID
= AUTH.AuthorID )

A Dimension Attribute with Double DataType converts a zero member to "(blank)"

In one of our SSAS 2005 sp2 cubes, we have a Dimension sourced from an Oracle table, with an Attribute that has a DataType of Double. When we process the Dimension and browse the Attribute, we see the following members:

All

-.01

-.002

.00000000001

.0000000001

Unknown

Notice how it's blank where the '0' (zero) member should be. The underlying Oracle table does have '0' (zero) values for this column (you can see them when you explore the data in the DSV). To make things even more confusing,

the Unique_Name for this blank member is: [Dimension].[Attribute].&[0]

its Member_Value is: 0

its Member_Caption is: Null

its Member_Key is: 0

Has anyone seen this behaviour before? If so, how do we fix it?

Thank you.

Hi,

Have you set the NameColumn to look at the same field? And is it a datatype of WChar?

Matt

Hi Matt,

Thanks for the reply. Yes, setting the NameColumn to the field was one of the things I tried. The database field is NUMERIC(15,9), the KeyColumn is a datatype of Double and the NameColumn is a datatype of WChar. Even with the NameColumn set to the same field, I still get a blank where the 0 value should be.

-Robbie

A difficult Combining Rows problem

Greetings,
I'm working to combine rows based on a time window and I am hoping to
be able to write a stored procedure to do this for me, rather than have
parse through all this data in my program. I'm not very well versed
with T-SQL syntax.. just enough to get by selecting using inner joins,
updating and inserting... that's about it. (Hence why I am here.)
The raw data I have below looks like this:
groupID, StartTime, EndTime, Min, Max, Points
----
1, 2005-10-05 06:00, 2005-10-05 06:14:59, 7, 32, 13
1, 2005-10-05 06:15, 2005-10-05 06:29:59, 5, 29, 6
1, 2005-10-05 06:30, 2005-10-05 06:44:59, 5, 28, 4
1, 2005-10-05 06:45, 2005-10-05 06:59:59, 5, 29, 16
1, 2005-10-05 07:00, 2005-10-05 07:14:59, 5, 23, 13
1, 2005-10-05 07:15, 2005-10-05 07:29:59, 5, 25, 18
1, 2005-10-05 07:30, 2005-10-05 07:44:59, 5, 34, 49
1, 2005-10-05 07:45, 2005-10-05 07:59:59, 5, 31, 49
Pretty straight forward; you can see each entry is a 15 minute time
interval. What I want to be able to do is to use a view or a stored
procedure to view this in one hour chunks, like below:
groupID, StartTime, EndTime, Min, Max, Points
----
1, 2005-10-05 06:00, 2005-10-05 06:59:59, 5, 32, 39
1, 2005-10-05 07:00, 2005-10-05 07:59:59, 5, 34, 129
This involves several things:
- Recognizing that there are variable # of rows (maybe we only have 3
15 minute entries instead of 4)
- Getting a min of those row's min column
- Getting a max of those row's max column
- Getting a total for those row's points column
- Input to any view or whatever would be based on the startTime and
endTime and would always be in whole hours.
I have a feeling that I am going to be doing this all in the C# .NET
end of things, but it's at least worth a shot asking all of you SQL
experts. What I am basically interested in knowing is, do you all
think that this is possible using views or stored procedures or
something else I don't know about. I didn't even know about views
until i started researching how to do this.
Any ideas? Is this possible? Should I just give up and do it on the
C# end of things? Seems to me that it might be possible to do in a
stored procedure, but possibly not worth my time. I appreciate any
help or suggestions.
Jason

Try this:

SELECT groupid,
MIN(DATEADD(HH, DATEDIFF(HH,'20050101',starttime), '20050101')),
MIN(DATEADD(HH, DATEDIFF(HH,'20050101',starttime), '2005-01-01T00:59:59')),
MIN(min), MAX(max), SUM(points)
FROM tbl
GROUP BY groupid, DATEDIFF(HH,'20050101',starttime) ;
David Portas
SQL Server MVP
--

Hi
Check out the dateadd/datepart functions in Books Online for rounding times.
Try:
SELECT GROUPID, DATEADD(mi,-DATEPART(mi,Starttime),Starttime) AS StartTime,
DATEADD(ms,-3,DATEADD(hh,1,DATEADD(mi,-DATEPART(mi,Starttime),Starttime)))
AS EndTime,
Min([Min]), Max([Max]), SUM([Points])
FROM Readings
GROUP BY GroupId,
DATEADD(mi,-DATEPART(mi,Starttime),Starttime),
DATEADD(ms,-3,DATEADD(hh,1,DATEADD(mi,-DATEPART(mi,Starttime),Starttime)))
John
John Bell and David Portas,
I will have to read up on these Dateadd/DatePart parameters an actually
interpret what is going on within these statements, but just from what
you gave me here it looks like this will work out very well, and I
really appreciate the insight. This will allow me to vary that time
window fairly easily I do believe, all on a SQL call (that's much
better than bringing back all the data and parsing through it all it.
Thanks again,
Jason

John
I have read over those functions and I now understand what they do and
how to use them, but I am still unsure as to why the min / max /
total fields actually work. I assume it has something to do with the
GROUP BY statements, but again, I don't know why.
Assuming black magic happens and thats just how it works, I should just
be able to change those hh,1 to hh,4 and get 4 hour increments instead.
When I do that, the Starttime and Endtime values do return correctly
(although I do get an entry for 8-12, 9-1, 10-2, etc... that's fine) but
the MIN/MAX/SUM stuff is still reflective of the 1 hour timing.. so
that black magic that is limiting the MIN/MAX/SUM to one hour is still
limiting them to one hour even with the altered start and end times.
I'm unsure how to fix or get around this because I don't yet understand
what is limiting that max to an hour in the first place. How does this
work? I've been tripped up GROUP BY things before, it's my kryptonite
for some reason.
Hope that is not too confusing, I'm all jumbled in my head.
I really appreciate the help with this so far, you've all been
wonderful.
Jason

On 10 Nov 2005 11:41:54 -0800, Factor wrote:

>John
>I have read over those functions and I now understand what they do and
>how to use them, but I am still as to why the min / max /
>total fields actually work. I assume it has something to do with the
>GROUP BY statements, but again, I don't know why.
Hi Jason,
Correct. The GROUP BY tells SQL Server to combine the data from several
rows into one row. This is normally used to report totals, minimum,
maximum per project, per section, etc. But with the appropriate
expression, it cal also be used to combine rows that fit in the same
"period" into one group.
Though John's and David's versions both work, I suggest you go with
Davids version, as this is more flexible. (And, once you get your head
around it, easier to understand as well).
Basically, John's version works by taking each of the date parts you
want to disregard (milliseconds, seconds, minutes), then subtracting
that amount of time from the Starttime. The end result will of course be
the last full hour equal to or before Starttime.
David's version works the other way around - it calculates the number of
full hours that have elapsed since a chosen anchor date, then adds that
number to the chosen anchor date. The result will be the same as John's
expression.
(Note: David chose to just use the number of hours for the group by, and
add it back to the anchor date in the SELECT clause only)

>Assuming black magic happens and thats just how it works, I should just
>be able to change those hh,1 to hh,4 and get 4 hour increments instead.
No. I'll give you two examples how to modify David's query to report on
4-hour intervals and to report on 1/2-hour intervals.
For 4-hour intervals, again calculate the number of hours since an
anchor date. Divide by 4 and truncate, then multiply by 4 again. Add
this number of hours to the anchor date. There you have the start of the
last 4-hour interval
SELECT groupid,
MIN(DATEADD(hour,
4 * (DATEDIFF(hour, '20050101', Starttime) / 4),
'20050101')),
MIN(DATEADD(hour,
4 * (DATEDIFF(hour, '20050101', Starttime) / 4),
'2005-01-01T03:59:59')),
MIN(min), MAX(max), SUM(points)
FROM tbl
GROUP BY groupid, DATEDIFF(hour, '20050101', Starttime) / 4;
For 1/2-hour intervals, we can't divide the number of hours since the
anchor date by 0.5, as that won't give us back the precision we already
lost. Instead, we'll have to calculate minutes and divide by 30:
SELECT groupid,
MIN(DATEADD(minute,
30 * (DATEDIFF(minute, '20050101', Starttime) /30),
'20050101')),
MIN(DATEADD(minute,
30 * (DATEDIFF(minute, '20050101', Starttime) /30),
'2005-01-01T00:29:59')),
MIN(min), MAX(max), SUM(points)
FROM tbl
GROUP BY groupid, DATEDIFF(minute, '20050101', Starttime) /30;
In both cases, don't forget to change the shifted anchor value in the
expression for the end point of the interval. Instead of using the same
anchor date, then adding 30 minutes or 4 hours minus one second, the
anchor date is shifted by 30 minutes or 4 hours minus one second.
Now, the above code can still be simplified further. If your table
always has the complete data (as your sample rows indicate), then you
could change the above queries to:
SELECT groupid,
MIN(StartTime), MAX(EndTime),
MIN(min), MAX(max), SUM(points)
FROM tbl
GROUP BY groupid, DATEDIFF(minute, '20050101', Starttime) /30;
-- or: GROUP BY groupid, DATEDIFF(hour, '20050101', Starttime) / 4;
Note that this might show "holes" in the periods if your real data is
not as complete as the sample you posted indicates. But the advantage is
that you get rid of the "shifted" anchor date for calculating end time.
Final step would be to put it in a stored procedure and use a parameter
for the interval length (in minutes):
SELECT groupid,
MIN(StartTime), MAX(EndTime),
MIN(min), MAX(max), SUM(points)
FROM tbl
GROUP BY groupid, DATEDIFF(minute, '20050101', Starttime) / @Interval;
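Wrapped as the stored procedure described above (table and column names as in the thread; the procedure name is hypothetical, @Interval is the interval length in minutes):

```sql
CREATE PROCEDURE dbo.IntervalSummary
    @Interval int   -- interval length in minutes, e.g. 60, 240, 30
AS
    SELECT groupid,
           MIN(StartTime) AS StartTime, MAX(EndTime) AS EndTime,
           MIN([min]) AS [Min], MAX([max]) AS [Max],
           SUM(points) AS [Total Points]
    FROM tbl
    GROUP BY groupid, DATEDIFF(minute, '20050101', StartTime) / @Interval;
GO
```

Calling EXEC dbo.IntervalSummary @Interval = 240 then gives the 4-hour chunks discussed earlier without any query changes.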
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)

Hi Jason,
What David Provided is an Excellent query .
Let us see if this query can help you.
Select GID, Min(STime), Max(ETime),
Min(Minimum), Max(Maximum), Sum(Points)
From yourTableName
Group By GID, Convert(varchar, STime, 112), DatePart(hh, STime)
Having the same names as functions/keywords sounds confusing to me, so I
changed them.
With Warm Regards
Jatinder Singh

Hi
This is easier with David's method (see Hugo's reply for an explanation).
Dividing the number of hours by 4 and dropping the remainder will give you 4-hour chunks when they are multiplied back up. You also need to change the end time to give a 4-hour gap.
SELECT groupid,
MIN(DATEADD(HH,
4*(DATEDIFF(HH,'20050101',starttime)/4),'20050101')
) AS Starttime,
MAX(DATEADD(HH,
4*(DATEDIFF(HH,'20050101',starttime)/4),'2005-01-01T03:59:59')
) AS Endtime,
MIN(min) AS [Min],
MAX(max) AS [Max],
SUM(points) AS [Total Points]
FROM Readings
GROUP BY groupid,
4*(DATEDIFF(HH,'20050101',starttime)/4)
John
"Factor" wrote:

> John
> I have read over those functions and I now understand what they do and
> how to use them, but I am still as to why the min / max /
> total fields actually work. I assume it has something to do with the
> GROUP BY statements, but again, I don't know why.
> Assuming black magic happens and thats just how it works, I should just
> be able to change those hh,1 to hh,4 and get 4 hour increments instead.
> When I do that, the Starttime and Endtime values do return correctly
> (although I do get an entry for 8-12, 9-1, 10-2, etc... thats fine) but
> the MIN/MAX/SUM stuff is still reflective of the 1 hour timing.. so
> that black magic that is limiting the MIN/MAX/SUM to one hour is still
> limiting them to one hour even with the altered start and end times.
> I'm unsure how to fix or get around this because I don't yet understand
> what is limiting that max to an hour in the first place. How does this
> work? I've been tripped up by GROUP BY things before, it's my kryptonite
> for some reason.
> Hope that is not too confusing, I'm all jumbled in my head.
> I really appreciate the help with this so far, you've all been
> wonderful.
> Jason
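A minimal sketch of the DATEDIFF/DATEADD bucketing John describes, against an assumed cut-down version of the Readings table (only the columns the query needs; the anchor date '20050101' is arbitrary, it just has to be the same in both places):

```sql
-- Assumed minimal shape of the Readings table from the thread
CREATE TABLE Readings (
    groupid   int      NOT NULL,
    starttime datetime NOT NULL,
    points    int      NOT NULL
);

-- DATEDIFF counts whole hours from the anchor date; integer division
-- by 4 collapses those counts into 4-hour bucket numbers, and DATEADD
-- turns each bucket number back into a real datetime for display.
SELECT groupid,
       DATEADD(HH, 4 * (DATEDIFF(HH, '20050101', starttime) / 4), '20050101') AS bucketstart,
       SUM(points) AS totalpoints
FROM Readings
GROUP BY groupid,
         4 * (DATEDIFF(HH, '20050101', starttime) / 4);
```

Grouping by the bucket expression is what makes MIN/MAX/SUM aggregate per 4-hour window rather than per hour.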
>|||Wonderful! Lots to take in, I thank everyone for their help. I've made
a lot of progress and I've learned a TON about SQL in the past two
days.
I hope I can help you all in the future with something!
Thanks again,
Jason

A differential backup that seems too darn big...

I have a db that is currently about 7 gb. It is currently on a test
instance of SQL 2005. It is not being used by anyone but me for the
purpose of learning one or two things about 2005. Here is what I deem
odd:
If I do a full backup of the database, I get a backup of 7 gb.
If I then IMMEDIATELY do a differential backup, I get a backup of 4.7
gb.
If I then IMMEDIATELY do a differential backup, I get a backup of 4.7
gb.
If I then IMMEDIATELY do a differential backup, I get a backup of 4.7
gb.
I was starting to see a pattern. I can guarantee there are no other
users and that I didn't change the database in between backups.
tia,
SteveHi
From BOL:
"A differential backup is based on the most recent, previous full backup of
the data that is included in the differential backup. A differential backup
captures only the data that has changed since that full backup. This is known
as the base of the differential. A differential backup includes only the data
that have changed since the differential base. "
If you don't do another full backup between the differential backups they
will only get bigger if someone changes the data, and will stay the same if
they don't.
You don't say how old your base is, but the size of the differential backups
indicates a significant amount of changes. Have you re-indexed or shrunk the
files since the full backup?
John
"Not the Face" wrote:

> I have a db that is currently about 7 gb. It is currently on a test
> instance of SQL 2005. It is not being used by anyone but me for the
> purpose of learning one or two things about 2005. Here is what I deem
> odd:
> If i do a full backup of the database, I get a backup of 7 gb.
> If I then IMMEDIATELY do a differential backup, I get a backup of 4.7
> gb.
> If I then IMMEDIATELY do a differential backup, I get a backup of 4.7
> gb.
> If I then IMMEDIATELY do a differential backup, I get a backup of 4.7
> gb.
> I was started to see a pattern. I can guarantee there are no other
> users and that I didn't change the database in between backups.
> tia,
> Steve
>|||"Not the Face" <nottheface@.gmail.com> wrote in message
news:1166552868.097104.258360@.73g2000cwn.googlegroups.com...
>I have a db that is currently about 7 gb. It is currently on a test
> instance of SQL 2005. It is not being used by anyone but me for the
> purpose of learning one or two things about 2005. Here is what I deem
> odd:
> If i do a full backup of the database, I get a backup of 7 gb.
> If I then IMMEDIATELY do a differential backup, I get a backup of 4.7
> gb.
> If I then IMMEDIATELY do a differential backup, I get a backup of 4.7
> gb.
> If I then IMMEDIATELY do a differential backup, I get a backup of 4.7
> gb.
>
How large is the transaction log?
I believe it's backing up the entire transaction log PLUS any changes in the
database.

> I was started to see a pattern. I can guarantee there are no other
> users and that I didn't change the database in between backups.
> tia,
> Steve
>|||Sorry about the lag and I appreciate the responses.
The DB is currently ~7.6 gb.
The Transaction Log is currently 5 mb.
I have tried shrinking the DB and Log files (shrinking the whole DB and
each file individually)
This is on a test system, so I have control over the database changing.
It isn't. I was literally doing the differential immediately after
the full backup.
Thanks for your help.
Steve.
Greg D. Moore (Strider) wrote:
> "Not the Face" <nottheface@.gmail.com> wrote in message
> news:1166552868.097104.258360@.73g2000cwn.googlegroups.com...
> How large is the transaction log?
> I believe it's backing up the entire transaction log PLUS any changes in the
> database.
>|||Hi
4.7GB does seem large for a differential backup. If you ran the backups as a
script such as:
BACKUP DATABASE [AdventureWorks] TO DISK =
N'C:\Backups\AdventureworksFull.bak' WITH NOFORMAT, NOINIT, NAME =
N'AdventureWorks-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
BACKUP DATABASE [AdventureWorks] TO DISK =
N'C:\Backups\AdventureworksDiff1.bak' WITH DIFFERENTIAL , NOFORMAT, NOINIT,
NAME = N'AdventureWorks-Differential Database Backup', SKIP, NOREWIND,
NOUNLOAD, STATS = 10
GO
BACKUP DATABASE [AdventureWorks] TO DISK =
N'C:\Backups\AdventureworksDiff2.bak' WITH DIFFERENTIAL , NOFORMAT, NOINIT,
NAME = N'AdventureWorks-Differential Database Backup', SKIP, NOREWIND,
NOUNLOAD, STATS = 10
GO
BACKUP DATABASE [AdventureWorks] TO DISK =
N'C:\Backups\AdventureworksDiff3.bak' WITH DIFFERENTIAL , NOFORMAT, NOINIT,
NAME = N'AdventureWorks-Differential Database Backup', SKIP, NOREWIND,
NOUNLOAD, STATS = 10
GO
Then a directory of C:\backups gives
Directory of C:\Backups
17/01/2007 15:18 <DIR> .
17/01/2007 15:18 <DIR> ..
17/01/2007 15:18 1,133,056 AdventureworksDiff1.bak
17/01/2007 15:18 1,133,056 AdventureworksDiff2.bak
17/01/2007 15:18 1,133,056 AdventureworksDiff3.bak
17/01/2007 15:18 171,002,368 AdventureworksFull.bak
4 File(s) 174,401,536 bytes
2 Dir(s) 1,241,235,456 bytes free
This shows what you would expect.
John
"Not the Face" wrote:

> Sorry about the lag and I appreciate the responses.
> The DB is currently ~7.6 gb.
> The Transaction Log is currently 5 mb.
> I have tried shrinking the DB and Log files (shrinking the whole DB and
> each file individually)
> This is on a test system, so I have control over the database changing.
> It isn't. I was literally doing the differential immediately after
> the full backup.
> Thanks for your help.
> Steve.
> Greg D. Moore (Strider) wrote:
>|||Yeah. So here was the problem.
Turns out that if you Shrink the DB and Log files after the backup, it
wants to make a really large differential backup for some reason.
Even though you haven't *changed* the data at all.
You just *moved* the data around a bit. You know. All of it (most of
it anyway).
Whoops. I moved the shrinks in front of the backup and now my
differential is 1,121 kb. Seems a bit more reasonable.
Thanks for pointing the fingers in the right direction.
Steve.
On Jan 17, 10:45 am, John Bell <jbellnewspo...@.hotmail.com> wrote:
> Hi
> 4.7GB does seem large for a differential backup. If you ran the backups as a
> script such as:
> BACKUP DATABASE [AdventureWorks] TO DISK =
> N'C:\Backups\AdventureworksFull.bak' WITH NOFORMAT, NOINIT, NAME =
> N'AdventureWorks-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
> GO
> BACKUP DATABASE [AdventureWorks] TO DISK =
> N'C:\Backups\AdventureworksDiff1.bak' WITH DIFFERENTIAL , NOFORMAT, NOINIT,
> NAME = N'AdventureWorks-Differential Database Backup', SKIP, NOREWIND,
> NOUNLOAD, STATS = 10
> GO
> BACKUP DATABASE [AdventureWorks] TO DISK =
> N'C:\Backups\AdventureworksDiff2.bak' WITH DIFFERENTIAL , NOFORMAT, NOINIT,
> NAME = N'AdventureWorks-Differential Database Backup', SKIP, NOREWIND,
> NOUNLOAD, STATS = 10
> GO
> BACKUP DATABASE [AdventureWorks] TO DISK =
> N'C:\Backups\AdventureworksDiff3.bak' WITH DIFFERENTIAL , NOFORMAT, NOINIT,
> NAME = N'AdventureWorks-Differential Database Backup', SKIP, NOREWIND,
> NOUNLOAD, STATS = 10
> GO
> Then a directory of C:\backups gives
> Directory of C:\Backups
> 17/01/2007 15:18 <DIR> .
> 17/01/2007 15:18 <DIR> ..
> 17/01/2007 15:18 1,133,056 AdventureworksDiff1.bak
> 17/01/2007 15:18 1,133,056 AdventureworksDiff2.bak
> 17/01/2007 15:18 1,133,056 AdventureworksDiff3.bak
> 17/01/2007 15:18 171,002,368 AdventureworksFull.bak
> 4 File(s) 174,401,536 bytes
> 2 Dir(s) 1,241,235,456 bytes free
> This shows what you would expect.
> John
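For completeness, a differential set like the one above is restored by applying the full backup first and then only the most recent differential. A rough sketch, reusing the file names from John's script:

```sql
-- Restore the full backup but leave the database in a restoring state
-- so that further backups can still be applied
RESTORE DATABASE [AdventureWorks]
FROM DISK = N'C:\Backups\AdventureworksFull.bak'
WITH NORECOVERY;

-- Apply only the latest differential, then bring the database online
RESTORE DATABASE [AdventureWorks]
FROM DISK = N'C:\Backups\AdventureworksDiff3.bak'
WITH RECOVERY;
```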


A different question about SQL Server Access Denied

I have 2 development servers, both of which I need to use to run an ASP.NET app. The app connects to an external SQL Server with SQL Server authentication only. One of my servers connects properly, but the same exact app running on the second server generates the dreaded SQL Server Access Denied message. A traditional .asp file running on the second server does connect successfully to the database, so I have deduced that the problem is related either to the ASPNET account on the second server, or else to the structure of the server itself or IIS.

I have verified on both machines that IIS uses the ASPNET account for anonymous access. The only difference I can find is that the working server is using NTFS (and the VS_Developers account has full permission on my application's directory), while the problem server is using FAT. Does anyone know if the FAT file system could be my problem? If so, should I convert to NTFS, or is there another solution? What else could I look at on the problem server? Any help would be greatly appreciated. Thanks.|||If you are using SQL Server authentication then the IIS anonymous account and the ASP.NET account are not involved in the authentication. What is the exact message that appears? Login failed for user [...] ?

If you are using SQL server with SQL server authentication then in the connection string you should be providing the UID and Password of a SQL Server Login and there should be no "integrated security" clause in the connection string.|||Thanks for the reply. The error message is: "SQL Server does not exist or access is denied". There is no login failure error occurring.

Here is my connection string:
"Data Source=mySQLIPhere,1433;Network Library=DBMSSOCN;Initial Catalog=myDBName;User ID=mySQLUser;Password=mySQLPw;"

As I mentioned, this connection string works fine from one of the 2 development servers as well as from the production web server, so I don't think the connection string is the problem.|||Seems ok to me; if you are willing to do some experiments try these in order.

Verify that you can access the mySQLIPhere machine from the machine causing the problem, e.g. by trying to open \\mySQLIPhere or by pinging it.

Use Query Analyzer to connect to SQL Server from the machine causing the problem. In the "Connect to SQL Server" dialog box provide mySQLIPhere as the SQL Server name, select SQL Server Auth, login name and password. If this works you may experiment rewriting your connection string bit by bit e.g.:
"Data Source=mySQLIPhere; User ID=mySQLUser; Password=mySQLPw;"
"Data Source=mySQLIPhere,1433; User ID=mySQLUser; Password=mySQLPw;"
"Data Source=mySQLIPhere;Initial Catalog=myDBName; User ID=mySQLUser; Password=mySQLPw;"

It is most probably the "SQL Server does not exist" (not found) part that is true, rather than the "access denied" part.

A Different Login failed for user sa problem

Hello,

I know there are already topics on the "Login failed for user 'sa'. Reason: Not associated with a trusted SQL Server connection" problem but I think this one might be different.

I am working with 2 servers, both running Windows Server 2003 with 1 running SQL Server 2000 and the other acting as a Web Server, and a PC running Windows XP Pro as a development machine - all of which are on the same domain.

I am developing a couple of ASP.NET web applications, and this is where the problem occurs. If I try to connect to the SQL database with the web application running on the development PC I get the "Login failed for user 'sa'. Reason: Not associated with a trusted SQL Server connection" error message.

This is where it gets interesting. If I take a copy of the code that is running on the development PC and run it on the Web Server, it is able to connect to the SQL database without any problems.

The Authentication mode is set to SQL Server and Windows. I have tried several different things with our Techs and we get the same problem.

Any ideas or suggestions would be greatly appreciated as we are totally stumped.

Thanks,
Al|||Howdy

Never, ever, ever, ever has NT authentication on SQL been bullet proof...then mix that with multiple versions of VB & SQL & W2K & W2K3 and....aw man...game over.....

Worth trying plain SQL authentication (if that's acceptable in your environment) and see how that goes - at least it will get your app working initially. Then work on the NT authentication bit.

You may wind up having to wait for the next SP for everything if it's a bug.

Cheers,

SG.|||All I am using at the moment is SQL authentication and I have absolutely no interest in using NT or Windows Authentication.

Thanks anyway,
Al|||Howdy,

Sounds like the connection thinks it is using a trusted connection...can you force it to be sure it's using straight SQL authentication? I have seen similar errors with SQL-configured apps trying to use ODBC connections configured for NT authentication.

HTH

Cheers,

SG|||Go to the enterprise manager, right click on the server, in the properties, select security tab and select the first option(SQL Server and Windows NT) in the Authentication.

Now try connecting from your application.

A Different Kind of MSSQL Question...

Hey all!
This may sound like a weird one but I couldn't think of a better audience to ask!

If you are a SQL Server DBA (or were, or aspire to be, or play one on T.V., etc.) and the CIO of your 2.something billion $ a year company has offered you a 1-hour forum to sit down and ask him anything you want with the premise of getting a straight answer, what would you ask?

I know this offers an excellent opportunity to cut up a bit...and do so if you must (keeping us entertained around here is important too!)...but I'm looking for work-related, SQL-related direction, strategy, etc., etc. type stuff.
Not: 'What's it like to drive a $250k 'benz to work everyday?' OR 'Can I have .5% of your $2,000,000 bonus this year?'

While I genuinely have those questions in my mind...I'm not looking to waste his $5,000-an-hour ass's time - and I only have 1 hour to chew his ear off.

What do ya'll think?...not knowing the type of business you're involved w/ ...I guess I would take the slant of trying to find out how secure the IT division is within the company...job security these days is fleeting, I've been fortunate only 3 co.'s in 25 years...

...on the lighter side...I'd want to know what he's investing in!!!|||Depends on what you want. A raise, promotion, job security, new career, to marry his daughter, to make an impression, or to be forgotten. What is the industry? Is he technical? Find how how to educate him about your field and it's importance. Ask him what his vision for the company is and where it will be in 5 or 15 years. How long does he plan on being at the helm? What would he do differently if he had to build the company again (assuming he did build it). Ask him who is competitors are and what about them he fears. Ask how you and your department can help him and your company be more competitive. Find out what his interests and hobbies are, what his family is like, where he grew up, make small talk about his home town or college. Ask him what the most difficult part of running a 2 billion$ company is. What is the most enjoyable part? Ask him why he does it - gets up every day and comes to work instead of retiring. Ask to take him to lunch. Do you want to run a 2 billion$ company some day?|||The CIO? As in not the CEO? I would probably want to see what the overall strategy for the department is What are the big projects coming down the line that always seem to take the line support folks by surprise.

Planning on consolidating all of the SQL Servers in the company onto one big Unisys machine?
Declaring parts of the SQL Server landscape to be High Availability?
Designing a Disaster Recovery (or is that Business Continuity, now?) site? Maybe develop plans for one?
What is the strategy for growing the department, or hiring strategy? Just hire College grads who may be eager, but are not very experienced, or hire veterans? One way, you are in an endless training loop. The other way may limit your career growth opportunities, if too many people get hired above you.
Then of course, what are the training options? Will people get transferred to other teams regularly? Get trained regularly?
Which brings us to strategies for new technologies. Wait for stuff to get de-supported before upgrading, or use all beta software?

There are a lot of questions to choose from, but I think the most important one would be: "Do you read DBForums?"|||"Do you read DBForums?"
HAH! If so...i hope he at least appreciates my interest in getting something out of this opportunity! (Though i'd wonder who was steering the ship if he's knocking off around here!)

Depends on what you want.
Looking for some glimpse into the motivations behind the man and/or the title. I believe in order to come to some understanding about someone you must first understand what moves him (beyond the basic electromagnetism technically moving everything).

How long does he plan on being at the helm?
oooo...good one.

type of business you're involved w/
Where else can you generate a 2+ billion $ a year revenue stream these days...drug dealing, of course...uh...I mean Healthcare.


Thanks all!

Cheers,
Oddsql

a different identity columns in replication question

sql2k sp3
I've got a little bit of a replication background, but never
with "immediate updating with queued updating for
failover" like I'm testing now. In fact I've never even done
just "immediate updating". I've seen lots of horror stories
here about identity columns causing replication problems
for people and was expecting to get them during my testing
this week, but I haven't. I'm curious as to why and thought
I'd ask. Here's what I've done in testing:
1. Made one big publication of all my tables.
2. Did a backup/restore to the subscriber.
3. Took the actions as outlined in KB 320499.
4. Took the actions as outlined in KB 320773.
Everything is up and running at this point. Replication
runs fine in both directions. Identity columns on both the
pub and sub are in place. I am NOT using the "Yes (Not for
Replication)" option on either box, nor have I modified the
ranges on either box. This is why I thought I'd have
problems. I thought I'd need to place different ranges on
them and use the "Not for Replication" option on them. But
I didn't and am having no problems. Why? Not that I'm
complaining. I even did a failover test by turning off the
publisher and switching to queued updating. I did inserts
while it was in that mode and still had no problems.
Again, I'm not upset by my success, but I just don't get why
others have the problems I've read about and I don't. There
is something about my settings that is correct; I am
curious to find out what it is.
TIA, ChrisR
Chris,
the errors people have reported come from a variety of causes. Often it is incorrect range management, either manual or on behalf of SQL Server. Sometimes the problems have been in using the standby server when the internal identity value of a column hasn't been updated. In some cases updating the identity value is not possible even through DBCC CHECKIDENT.
In your scenario as I understand it, there is no allowance for the publisher and subscriber being allocated the same identity value. This may not be a problem for you yet, but if someone on the subscriber attempts to insert a record while network connectivity is temporarily down, it'll go into the queue, and when the queue reader starts there could be conflicts. To avoid this you can have SQL Server allocate an identity range for you, or you can create the range manually. The latter is quite straightforward if you have relatively few subscribers. E.g. if you had one subscriber, the publisher could have a seed of 1 and an increment of 2 (odd numbers), while the subscriber has a seed of 2 and an increment of 2 (even numbers).
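A manual partition along those lines might be sketched as below (the table and column names are invented for illustration; this is a sketch, not the poster's actual schema):

```sql
-- On the publisher: odd identity values.
CREATE TABLE dbo.Orders (
    OrderID   int IDENTITY(1, 2) NOT NULL PRIMARY KEY,  -- seed 1, increment 2
    OrderDate datetime NOT NULL
);

-- On the subscriber: the same table, but even identity values.
CREATE TABLE dbo.Orders (
    OrderID   int IDENTITY(2, 2) NOT NULL PRIMARY KEY,  -- seed 2, increment 2
    OrderDate datetime NOT NULL
);
```

With this layout the publisher generates 1, 3, 5, ... and the subscriber 2, 4, 6, ..., so queued inserts from either side can never collide on OrderID.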
HTH,
Paul Ibison
|||> This may not be a problem for you as yet, but if someone on the subscriber
> attempts to insert a record and network connectivity is temporarily down,
> it'll go into the queue and when the queue reader starts, there could be
> conflicts.
This is the scenario I did in testing: turned off the Publisher, inserted into the Subscriber. When the Pub was back up there were no issues. You are saying there could be conflicts, and I believe you. Do you know what the circumstances are that would make this happen?
Thanks

|||Chris,
to test this you can force failover, or more easily set up an alternative test system with just a queue. Stop the queue reader agent and distribution agent. Insert a record into the publisher and subscriber, and the two new records will have the same identity value.
When starting the queue reader there will be a conflict registered, which is viewable in the conflict viewer.
This is not an error like in some of the other posts, but it is a problem of lost data which can be avoided by partitioning the identity range.
HTH,
Paul Ibison
|||With immediate updating you are guaranteed not to have identity range problems. The reason is that any update that happens on the subscriber is first applied on the publisher, where the publisher's identity range rules.
The problem, of course, is that your publisher and subscriber must be well connected and the publisher must always be online. If not, updates on your subscriber are rolled back. If the link between the publisher and subscribers goes down, updates can still occur on the publisher.
With queued, when your publisher is offline, all updates happen on the subscriber, so again, no identity problems, as no updates happen on the publisher.
As queued is an asynchronous process, when the publisher comes back online, unless you revert back to immediate, you can have identity range problems unless you are using automatic identity range management.
Automatic identity range management is basically trouble free. You run into problems with it when you have a range of, let's say, 100, and a batch that inserts 1000 rows (or really anything over the 100 range). The procs which do the automatic range management don't have time to work during the batch, and you get the problem.
So pick a range which is large. Many DBAs pick very large ranges which they know will not be blown for the lifetime of their replication solution. This option is called "set it and forget it". It works very well.
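If you go the automatic route, the range sizes and threshold can be set when the article is created. A rough sketch is below; the publication name, table name, and range values are invented, and the parameter names are the automatic identity range options as I recall them from SQL 2000's sp_addarticle, so verify against Books Online before using:

```sql
-- Sketch: enable automatic identity range management when adding an article.
-- Publication, table, and range sizes are made-up examples.
EXEC sp_addarticle
    @publication         = N'MyPublication',
    @article             = N'Orders',
    @source_object       = N'Orders',
    @auto_identity_range = N'true',
    @pub_identity_range  = 1000000,  -- range reserved for the publisher
    @identity_range      = 1000000,  -- range handed to each subscriber
    @threshold           = 80;       -- percent consumed before a new range is assigned
```

Picking ranges far larger than any realistic batch insert is what makes the "set it and forget it" approach work.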
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
|||Thanks Hilary and Paul. I just realized from reading your responses that I'm not totally positive whether this box will be used just for failover if the Publisher goes down. I'm not sure, but I don't think the two of them will ever be used at the same time, and the Subscriber will be written to only if the Pub is offline. If this is the case, I don't think I will have the identity range problem, will I? Come to think of it, should I switch the whole plan over to just Queued Updating if this is the case? Would I benefit from that in any way?
Thanks a lot, you guys, for your help.
ChrisR
|||Queued and bi-directional transactional replication are options. If you expect schema changes I would use queued as opposed to bi-directional transactional.
Queued will add a GUID column to all tables you are replicating, however.
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
|||I've done a bit of schema changes on replicated tables in the past. How does queued benefit the cause?
|||With bi-directional transactional replication you have to drop both publications, make changes on both sides, and rebuild. You can't use sp_repladdcolumn or sp_repldropcolumn when you are doing bi-directional transactional replication.
You can use these stored procedures when you are using transactional replication with queued updating subscribers.
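A call might be sketched like this (table, column, and publication names are invented for illustration; check the sp_repladdcolumn parameter list in Books Online for your service pack):

```sql
-- Sketch: add a column to a published table and propagate it to subscribers.
EXEC sp_repladdcolumn
    @source_object      = N'Orders',
    @column             = N'ShipRegion',
    @typetext           = N'nvarchar(40) NULL',   -- column definition, as DDL text
    @publication_to_add = N'all';                 -- add to every publication of this table
```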
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
|||Thanks Hilary. I've used sp_repladdcolumn in the past in transactional replication, but it wasn't bi-directional. I assumed I could use it now as well. Good to know. I'll find out all the requirements this week and will now be more informed on which road to take. Thanks again.
CR

A DB with the same name on two nodes

I'm not completely sure how to phrase this question.
In the 2-node cluster I'm working with, I noticed that there is a database named "Utility" on both nodes of the cluster, which confuses me. I thought an instance was clustered, not a database, so finding the duplicate DB doesn't make sense.
Now I understand that master, model, msdb & tempdb will be duplicated across nodes, but these are system DBs and (I would have assumed) special. The Utility DB serves a function for local stuff similar to a combination of msdb & master, but for local admin applications.
I was of the impression that an instance was clustered, not a database. So I don't understand how the duplicate could (or should) be there.
OK, it is clear that you "can", but what happens during a failover? It would
seem that the copy on the failed instance would be unavailable and could
cause issues.
Also, does that mean that you can have a db on a clustered instance that is
not available after a failover?
"Edwin vMierlo [MVP]" wrote:

> Yes you can,
> so "Instance1" has a database called "data1" in a cluster group called
> "Group1"
> and "Instance2" has a database called "data1" in a cluster group called
> "Group2"
> Maybe I misunderstand your post, but I do not see a problem
> rgds,
> Edwin.
|||Two different instances on the same cluster act completely independently of
each other, so the database names can all be the same or different.
No different really than if you installed two instances on a stand alone and
created a database called "myDB" on each, other than the fact that in your
cluster, each instance has its own drive resources.
Kevin Hill
3NF Consulting
http://www.3nf-inc.com/NewsGroups.htm
Real-world stuff I run across with SQL Server:
http://kevin3nf.blogspot.com
|||How many instances do you have installed?
There should only be one set of system databases per instance. And, they
should be located on a "shared" physical disk. Only 1 node should have
ownership of this disk at a time; therefore, you shouldn't have multiple
copies unless you had multiple instances.
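One quick way to confirm how many instances you are actually connected to, and which physical node currently hosts a clustered instance, is SERVERPROPERTY. A sketch (ComputerNamePhysicalNetBIOS requires SQL Server 2000 SP3 or later):

```sql
-- Sketch: identify the instance and the node currently hosting it.
SELECT
    SERVERPROPERTY('ServerName')                  AS server_name,    -- name clients connect to
    SERVERPROPERTY('InstanceName')                AS instance_name,  -- NULL for a default instance
    SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS physical_node,  -- node hosting the instance right now
    SERVERPROPERTY('IsClustered')                 AS is_clustered;
```

Running this against each connection makes it obvious whether the two "Utility" databases belong to one instance or to two independent instances.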
Sincerely,
Anthony Thomas

|||It's less of a concern than a comprehension issue.
So, if I have one database per instance (what I would have expected), during a failover that DB will be seen by the other instance. However, if I have the same database name on both instances, then during a failover the local copy will be the only visible one?
Like I said, a comprehension issue.
"Edwin vMierlo [MVP]" wrote:

> As Kevin already mentioned, after failover, there should be no difference,
> other than the two instances are online on the same physical node.
> An instance "lives" in a cluster group. The cluster group "acts" like a
> completely independent server, with its own Network Name, IP address, disks,
> and databases.
> Hope this helps to take your concerns away,
> Best Regards,
> Edwin.
> MVP - Windows Server - Clustering
|||SQL Server instances fail over, not databases. It is no different than running multiple instances on a stand-alone server, except that instances in a clustered configuration also differ by virtual server network name. They are still independent, ISOLATED binaries and databases. Nothing is "shared" between the instances except for the cluster nodes that can potentially host the resources.
The databases failover from one node to the other because the SQL Server
instance, network name, IP address, and disk change ownership between the
nodes. When SQL Server starts on the new host, it recovers the databases
just as if you had just restarted the services.
Sincerely,
Anthony Thomas


a DB connection Problem

Hello, I am developing a web application that uses SQL Server 2005 (installed on the same computer). My objective now is to migrate the solution to another computer, so I have moved the project to the new machine. As a result, the connection between the solution and the DB is now remote. When I run the solution, the error below is raised:

Login failed for user ''. The user is not associated with a trusted SQL Server connection.

Description:An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details:System.Data.SqlClient.SqlException: Login failed for user ''. The user is not associated with a trusted SQL Server connection.

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.


Stack Trace:

[SqlException (0x80131904): Login failed for user ''. The user is not associated with a trusted SQL Server connection.] System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +734963 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +188 System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +1838 System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) +33 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +628 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +170 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +359 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +28 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +424 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +66 System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +496 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +82 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +105 System.Data.SqlClient.SqlConnection.Open() +111 System.Web.DataAccess.SqlConnectionHolder.Open(HttpContext context, Boolean revertImpersonate) +84 
System.Web.DataAccess.SqlConnectionHelper.GetConnection(String connectionString, Boolean revertImpersonation) +197 System.Web.Security.SqlMembershipProvider.GetPasswordWithFormat(String username, Boolean updateLastLoginActivityDate, Int32& status, String& password, Int32& passwordFormat, String& passwordSalt, Int32& failedPasswordAttemptCount, Int32& failedPasswordAnswerAttemptCount, Boolean& isApproved, DateTime& lastLoginDate, DateTime& lastActivityDate) +1121 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved, String& salt, Int32& passwordFormat) +105 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved) +42 System.Web.Security.SqlMembershipProvider.ValidateUser(String username, String password) +83 System.Web.UI.WebControls.Login.OnAuthenticate(AuthenticateEventArgs e) +160 System.Web.UI.WebControls.Login.AttemptLogin() +105 System.Web.UI.WebControls.Login.OnBubbleEvent(Object source, EventArgs e) +99 System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +35 System.Web.UI.WebControls.Button.OnCommand(CommandEventArgs e) +115 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +163 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +7 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +11 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +33 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +5102

I understand from this that the connection to SQL Server 2005 failed. I also know that I must change something in the connection string, but I don't know what. How do I do this?

Thanks in advance

What's your connection string? I guess you're using Windows Authentication to connect to SQL. Make sure there is a login for the account used in your connection string on the SQL Server. To manage SQL logins, connect to the SQL Server in Management Studio, then go to Security -> Logins.|||

Hello, my connection string is as below:

<add name="LocalSqlServer" connectionString="Data Source=IBM7;Initial Catalog=Reclamation;Integrated Security=True" providerName="System.Data.SqlClient" />

Following the error message, I created a new login named IBM7/ASPNET and a new schema named IBM7/ASPNET owned by the new login. The error message is now as follows:

Server Error in '/Rec_Web' Application.

Procédure stockée 'dbo.aspnet_CheckSchemaVersion' introuvable. (Stored procedure 'dbo.aspnet_CheckSchemaVersion' not found.)

Description:An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details:System.Data.SqlClient.SqlException: Procédure stockée 'dbo.aspnet_CheckSchemaVersion' introuvable.

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.


Stack Trace:

[SqlException (0x80131904): Procédure stockée 'dbo.aspnet_CheckSchemaVersion' introuvable.] System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +857338 System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +734950 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +188 System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +1838 System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +149 System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +886 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +132 System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) +415 System.Data.SqlClient.SqlCommand.ExecuteNonQuery() +135 System.Web.Util.SecUtility.CheckSchemaVersion(ProviderBase provider, SqlConnection connection, String[] features, String version, Int32& schemaVersionCheck) +367 System.Web.Security.SqlMembershipProvider.CheckSchemaVersion(SqlConnection connection) +85 System.Web.Security.SqlMembershipProvider.GetPasswordWithFormat(String username, Boolean updateLastLoginActivityDate, Int32& status, String& password, Int32& passwordFormat, String& passwordSalt, Int32& failedPasswordAttemptCount, Int32& failedPasswordAnswerAttemptCount, Boolean& isApproved, DateTime& lastLoginDate, DateTime& lastActivityDate) +1121 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved, String& salt, Int32& 
passwordFormat) +105 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved) +42 System.Web.Security.SqlMembershipProvider.ValidateUser(String username, String password) +83 System.Web.UI.WebControls.Login.OnAuthenticate(AuthenticateEventArgs e) +160 System.Web.UI.WebControls.Login.AttemptLogin() +105 System.Web.UI.WebControls.Login.OnBubbleEvent(Object source, EventArgs e) +99 System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +35 System.Web.UI.WebControls.Button.OnCommand(CommandEventArgs e) +115 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +163 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +7 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +11 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +33 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +5102

So what should I do? Thanks in advance.

|||

Sorry, I can't understand the error message very well. I guess it indicates a permission issue on 'dbo.aspnet_CheckSchemaVersion'. Please make sure you're connecting to the right database, the one that contains 'dbo.aspnet_CheckSchemaVersion', and that the IBM7/ASPNET account has the proper permissions on the database objects. Suppose that for the login IBM7/ASPNET, you map it to a myuser user in the database:

grant execute on dbo.aspnet_CheckSchemaVersion to myuser

Considering that the IBM7/ASPNET account may use other database objects, you can also add it to the db_owner database role in the database.
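Put together, the mapping and grants above might be sketched as follows. myuser is the placeholder name from this post, and the login name assumes the machine account mentioned earlier in the thread, so adjust both to the real names:

```sql
-- Sketch: map the Windows login to a database user and grant it rights.
USE Reclamation;
CREATE USER myuser FOR LOGIN [IBM7\ASPNET];

-- Minimal grant for the failing call:
GRANT EXECUTE ON dbo.aspnet_CheckSchemaVersion TO myuser;

-- Or, more broadly, give the account full rights in this database:
EXEC sp_addrolemember N'db_owner', N'myuser';
```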

|||

Thanks for this indication. I think I understand the problem now: the database I want to connect to was imported from another DB that uses the ASP.NET membership feature, but it seems that during the DB copy the stored procedures related to membership weren't copied. For this reason, the procedure wasn't found. My question now: is there a way to copy only the non-system stored procedures?

Thanks in advance