
Phil was contentedly re-reading Exodus when it suddenly struck him that Moses didn't have the sort of difficulty that book editors in the IT industry generally have with their technical authors nowadays:

Source Click Here.

Microsoft have quietly been improving full-text indexing in SQL Server. It is time to take a good look at what it offers. Who better to give us that look than Robert Sheldon, in the first of a series.

Source Click Here.

With the Performance Data Collector in SQL Server 2008, you can now store performance data from a number of SQL Servers in one central location. This data is collected by a collection set on each server and stored in a shareable management data warehouse (MDW). Reports can be generated from this data using the built-in reports or by generating your own with Reporting Services. Brad McGehee explains more.

Source Click Here.

Session submission is now open for SQLBits IV.

 

It will be in Manchester on Saturday 28th March 2009.

 

We are planning on having more sessions than before, so if you submit a session you will have a better chance of being chosen.

 

To submit a session go to http://www.sqlbits.com/information/SessionSubmission.aspx (you will need to register on the site to be able to submit a session).




Source Click Here.

Talk about media hype and anti-MS sentiment: John Leyden writes an article in today's Register, "MS (finally) confirms unpatched SQL Server flaw" -> http://www.theregister.co.uk/2008/12/23/sql_server_0day_latest/ . At the end of the day, in order to exploit the problem you need to either a) give the hacker access to log in to your SQL Server, or b) have written your own application so badly that it is subject to SQL injection attack. In the real world neither of these should be possible if people have...(read more)

Source Click Here.

You know that NULL values on joined columns are omitted from the comparison when tables are joined. Consider this example:

Declare @t1 table (col1 int, col2 varchar(10))
insert into @t1
select 1, 'test1' union all
select 2, 'test2' union all
select NULL, 'test3' union all
select 5, 'test4'

Declare @t2 table (col1 int, col2 varchar(10))
insert into @t2
select 1, 'testing1' union all
select 2, 'testing2' union all
select NULL, 'testing3'...(read more)
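Since the excerpt is cut off before the comparison itself, here is a compact, self-contained sketch of the behaviour being described (it repeats the declarations above; the final query and its result are my own illustration):

-- Rows whose join column is NULL drop out of an inner join,
-- because NULL = NULL does not evaluate to TRUE.
Declare @t1 table (col1 int, col2 varchar(10));
insert into @t1
select 1, 'test1' union all
select 2, 'test2' union all
select NULL, 'test3' union all
select 5, 'test4';

Declare @t2 table (col1 int, col2 varchar(10));
insert into @t2
select 1, 'testing1' union all
select 2, 'testing2' union all
select NULL, 'testing3';

select t1.col1, t1.col2, t2.col2
from @t1 as t1
join @t2 as t2 on t1.col1 = t2.col1;
-- Returns only the rows for col1 = 1 and col1 = 2; the NULL rows
-- from both tables are omitted from the comparison.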

Source Click Here.

The next SQLBits will be taking place in Manchester (in the UK) on March 28th 2009, and if you're interested in submitting a session you can now do so here:
http://www.sqlbits.com/information/SessionSubmission.aspx

Remember, we're always looking for new blood, so don't be nervous and have a go at speaking! We're also looking for sponsors:
http://www.sqlbits.com/information/Sponsorship.aspx
It's the ideal venue to promote your SQL Server-related product or service to several hundred serious SQL Server users.

Hope to see you there!

Source Click Here.

There's been a lot of positive press for LINQ, such as the article about LINQ by Mike Otey at http://www.sqlmag.com/Article/ArticleID/48759/sql_server_48759.html.  You can also find lots of glowing reviews and info about LINQ by Troy Magennis at http://blog.aspiring-technology.com/.

 

I've been trying to figure out exactly how I feel about LINQ, along with several other developer-oriented technologies that Microsoft has launched over the years, such as CLR.  Ambivalence is the emotion that bubbles to the top most frequently.  It's pretty obvious to me that Transact-SQL is the red-headed step child within Microsoft's overall ranking of languages. 

 

One of the big problems I have with Microsoft's approach is that it's too tactical.  Every 2-3 years, Microsoft launches TNBT ("the next big thing").  TNBT will make our code better, our developers faster, our applications more efficient, walk your dog, wash your cat, tie your shoelaces, end world hunger, and otherwise make everything better.  The only problem is that TNBT is usually put together by one subteam of a major group, such as the languages group inside of the SQL Server development team.  These small teams, while brilliant, often don't get the top-down support to institute a major sweeping change to how things work.  Consequently, we get a feature set that, while useful, doesn't give us everything we need to sweep out the old and introduce the new.

 

This sort of 50% solution manifests itself in a lot of different ways, usually by making some aspects of the development process better and other aspects worse.   My friend and fellow SQL Server MVP Andrew Kelly has an interesting blog post and subsequent comments at http://sqlblog.com/blogs/andrew_kelly/archive/2007/09/06/linq-to-the-rescue.aspx which strongly illustrates this idea.  Basically, (most) developers still don't understand the basics of building an efficient database solution.  The thing that most improves database applications is proper planning and design.  But tools like LINQ and the Entity Framework most obviously help developers speed up the development process, in effect encouraging even less planning and design than ever before.  A recipe for disaster?  Almost certainly.

 

I hate to think that the language that I'm best at is the most likely to lose overall support.  On the other hand, I'd love to get in on TNBT ("the next big thing") while it's still in its genesis.  However, what exactly they plan to replace Transact-SQL with remains to be seen, since each new offering is, IMO, sadly lacking in value.

 

As an administrator, I'm putting a lot of eggs into the PowerShell basket - the first major scripting/programming language in many years that I'm taking the time to get really good at.  However, I still don't think we've hit TNBT in development languages and we probably won't until Microsoft takes the time to convene a high-powered team composed of members from both the SQL Server and Visual Studio organizations.

 




Source Click Here.

As promised, upon return from a trip to the frozen tundra of Northern Ontario, I am posting the latest updates from last week's release of SQL Server 2005 Service Pack 3.  For those who have been waiting to test the service pack until the post-SP3 cumulative update is available, you can get it in KB #959195 and read more about it over at the SQL Server Release Services blog.  I apologize for being a few days behind, as the release was pushed out on Saturday, but I had very little access to anything during my trip.

For those of you keeping track, SP3 would bump your SQL Server instance to build # 9.00.4035, while the CU#1 update will bump it all the way up to 9.00.4207.
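If you want to confirm which build an instance is actually running, a quick check (a minimal sketch using standard server properties) is:

-- Returns the build number (e.g. 9.00.4035 after SP3), the product level
-- and the edition of the connected instance.
SELECT SERVERPROPERTY('ProductVersion') AS [Build],
       SERVERPROPERTY('ProductLevel')   AS [Level],
       SERVERPROPERTY('Edition')        AS [Edition];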

Also, if you are not ready to migrate to Service Pack 3, there is a security bulletin you should be aware of, surrounding an exploit using the extended stored procedure sp_replwritetovarbin.  You can read about the issue in Microsoft Security Advisory (961040) and get some more background information at the MSRC and SVRD blogs.  Note that SQL Server 2005 instances patched with SP3 or later are not affected by this vulnerability, nor is any RTM+ build of SQL Server 2008.
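For instances that cannot be patched straight away, the advisory describes a workaround along these lines (shown here only as a sketch from memory; verify the exact steps against advisory 961040 before applying it):

-- Workaround described in Microsoft Security Advisory 961040 (as I recall it):
-- deny execute on the extended stored procedure to public.
USE master;
DENY EXECUTE ON sp_replwritetovarbin TO public;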




Source Click Here.

Kevin Kline recently posted, wondering about the directions for LINQ.

When people refer to LINQ, they're often referring to "LINQ to SQL" instead.

LINQ as a language enhancement is pretty cool. It provides a way to easily navigate enumerable objects. The only thing that puzzles me is why they picked SQL-like syntax for this instead of something more appropriate for objects.

On the other hand, LINQ to SQL I can't get excited about. The main issues are that it's tied to SQL Server and only gives you a one-to-one mapping of tables to objects. That leaves you with either a lousy database design or a lousy object model.

The easiest example is anything that involves many to many relationships. If I have a flights table and a passengers table, I'd typically have a linking table (like flightmanifests) that records which passengers are on which flights. That's good database design but I wouldn't want an object model based directly on those three tables. What I'd want at the object level is a passenger object with a flights collection and a flight object with a passengers collection.
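A hypothetical sketch of the three tables described above (the DDL is mine, not from the original post); the flightmanifests linking table is what resolves the many-to-many relationship at the database level:

-- Good relational design: a linking table between flights and passengers.
CREATE TABLE flights
    (flightid int PRIMARY KEY, flightno varchar(10));
CREATE TABLE passengers
    (passengerid int PRIMARY KEY, fullname varchar(100));
CREATE TABLE flightmanifests
    (flightid int NOT NULL REFERENCES flights(flightid),
     passengerid int NOT NULL REFERENCES passengers(passengerid),
     PRIMARY KEY (flightid, passengerid));
-- At the object level, though, you would want a passenger object with a
-- flights collection and a flight object with a passengers collection,
-- not three classes mapped one-to-one to these tables.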

The Entity Framework lets you cope with both the issues mentioned and seems to be more likely to be the direction that Microsoft will keep heading. It also adds some interesting constructs in the ESQL language. However, its generic nature means you need to work with a very constrained set of data types. You lose the rich data types available with SQL Server in the trade-off for the ability to write more generic code.

From the database end, LINQ to SQL can generate quite poor TSQL and the way people often use it ends up causing plan cache pollution issues, much the same as anyone using AddWithValue() to add parameters to a SqlParameters collection in ADO.NET would.
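If you want to see whether this is happening on your own server, a rough sketch (standard DMVs, SQL Server 2005 and later) is to look for large numbers of near-identical single-use ad hoc plans:

-- Lots of 'Adhoc' plans with usecounts = 1 whose text differs only in
-- literals or parameter lengths is the classic plan cache pollution pattern.
SELECT cp.usecounts, cp.objtype, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Adhoc'
ORDER BY cp.usecounts;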

Every time I show people the TSQL code generated from some simple LINQ to SQL queries, I see two reactions. People with a developer background usually say "I'm glad I didn't have to write that". People with a database background say "No-one should write that - it's horrible code". And LINQ to Entities generates even more generic code (as you would imagine it needs to when it doesn't even know what database engine it's targeting).

This is usually all justified by increased developer productivity. "As long as it works well enough and it's done quickly, who cares if what's going on under the covers isn't great?" And that's 100% true for small or simple applications. However, the places I see Microsoft pushing this technology is to ISVs and large enterprise clients. These are likely to be the people it's least appropriate for.

The ADO.NET group seems to have adopted LINQ to SQL now but I know they really don't love it and the Entity Framework is what they're interested in. So I can't see LINQ to SQL having much of a future at all. LINQ to Entities is much more likely to stay around.

The other big issue I see in this area is maintenance. Microsoft have made big strides with the Database Edition of Visual Studio Team System but every time I ask questions about what the plans are for allowing it to "see" all the mapping files from these coding technologies, I get very blank stares back. Many DBAs can't make any changes to their databases today because they have a sea of Access applications all over the organisation and they don't know what will break when they change something. Are LINQ to SQL and Entity Framework mapping files going to be the next generation of Access-database-style management issues?




Source Click Here.

Microsoft does not provide any drivers to connect to Sybase. Sybase have their own drivers (32-bit only), but they do not have 64-bit versions. I can't understand why they don't provide 64-bit drivers. On the flip side, why doesn't Microsoft provide any drivers for SSIS?

If you are using Oracle or Teradata as your source, you can get the Attunity connectors for FREE as long as you have a SQL Server Enterprise licence.

Unfortunately, Microsoft didn't sponsor a Sybase connector from Attunity. There is another option available from Data Direct. You have to purchase both connectors from their respective vendors. These connectors are not cheap!

Thanks
Sutha



Source Click Here.

 

Some of you are aware that I am writing an (almost) weekly commentary for a SQL Server Magazine e-newsletter called SQL Server Magazine UPDATE.  I was told that I could write about anything related to SQL Server, but that turned out not to be true. There has to be at least some technical or business value to my commentary. A few weeks ago, I wrote a commentary about my publishers' page count limitations on my books, and it was rejected. :-(

So I guess I'll just have to post it here, since so far Adam and Peter haven't rejected anything I've written. :-)

 

In my commentary of August 14, I talked about "Too much information", and listed many different sources where you could get details about the features and behavior of SQL Server 2008. When I referred to "too much", I was talking about too many different places to have to look. I am currently faced with a problem of having too much information in one book: my upcoming SQL Server 2008 Internals book. However, it's not readers or information seekers saying that it is too much, it is the publisher.

When I sign a contract to write a book, the publisher always asks for a page estimate. However, before I start researching a new topic, I have no idea how many pages it will take to explain the topic well. For example, before I knew anything at all about SQL Server 2008 compression, I had to figure out how many pages I was going to write about it. I always tell my editors that I just can't say, and they reply that they just need an estimate, and not to worry. So I have always given a lower limit, trying to figure out the fewest number of pages I might need. I always assumed that the publisher wanted something in the contract that would assure that the book would have some substantial content, and not just be something fluffy written in a hurry to meet a deadline. So I always thought the estimate was a minimum. But I was wrong.

Suppose you have a contract to work for someone for 40 hours a week. Is that 'at most' 40 hours, or 'at least'? Is your boss going to tell you to go home once you've been there for 40 hours? If I have a contract to work 20 hours on a project for a fixed rate and end up working 25, is my client going to complain?

I was quite surprised when during my last month of writing my SQL Server 2008 Internals book, my project editor wrote to me and said we already had 70% of the page count for only about 50% of the chapters, and she hoped the rest of the chapters would be very short. In fact, the longest chapter had not been submitted yet. So I had a week of not writing while we tried to figure out what we could cut, and we tried to get the publisher to agree to increase the limit. It was very hard to get an increase, because the lower number was in my contract.

This got me thinking of a line from the 1984 movie Amadeus, when Emperor Joseph tells Mozart: "Your work is ingenious. It's quality work. And there are simply too many notes, that's all. Just cut a few and it will be perfect." And Mozart replies: "Which few did you have in mind?" http://www.imdb.com/title/tt0086879/quotes

The outline of the book had been carefully planned and all the co-authors and I had either finished writing or were almost finished. What researched, tested and polished information about SQL Server internals should we remove?

I finally came to a somewhat satisfactory arrangement with the editors for my Internals book (although it's still not clear if the introduction, foreword and index will count against the total or not), and I thought that I wouldn't have to worry about anything like that for a while, because not every publisher could be so short-sighted as to sacrifice quality by trying to reduce immediate costs. But again I was wrong. I am also working as a co-editor on a book of SQL Server tips and best practices, written by a group of over 40 SQL Server MVPs. We are not getting any payment of any kind, but will donate all our royalties to a children's charity. We're also doing all the technical editing among ourselves, so the publisher has very little work to do. But I just found out that we also will have a severe page limit on this book. So again we have to decide what to cut and what to keep.

I'm sure the publishers have their reasons for this limit (although it never came up on any of my earlier books), but it seems like reasons can always be re-evaluated. I'm just glad there are more places that my readers can look for information, so anything that doesn't fit in my book, my readers can find out more about in my blog, in a class, in a conference session, or in one of the "too many different places" that I have already told you about.

Happy Reading!

~Kalen




Source Click Here.

The way in which SQL Server has treated time has been a very misunderstood subject for as long as I can remember. The largest part of that was the fact that SQL Server was only accurate to 3.33ms. I would like to point you to an excellent article on this and other details surrounding datetime in SQL Server by Tibor Karaszi, found here:  http://www.karaszi.com/SQLServer/info_datetime.asp  Well, now with the release of SP3 for SQL Server 2005 (and of course SQL 2008 as well), SQL Server no longer uses the inaccurate techniques associated with the RDTSC component of Windows. Since there is a lot to explain about how this all works, I will again point you to another link that goes into much greater detail:  http://blogs.msdn.com/psssql/archive/2008/12/16/how-it-works-sql-server-no-longer-uses-rdtsc-for-timings-in-sql-2008-and-sql-2005-service-pack-3-sp3.aspx  I for one say it is about time. :)
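As a side note on the 3.33ms figure, here is a tiny sketch that shows the difference on SQL Server 2008 (SYSDATETIME() returns the newer datetime2 type, GETDATE() the classic datetime):

-- datetime values are only accurate to roughly 1/300th of a second
-- (times land on .000, .003 or .007 boundaries), while datetime2 is
-- accurate to 100 nanoseconds.
SELECT GETDATE()     AS classic_datetime,
       SYSDATETIME() AS new_datetime2;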




Source Click Here.

I am already pretty booked for the entire first half of the new year.  March and April include three separate trips to Scandinavia and in June I head down under for a four-city tour in Australia. I've also got several public training classes scheduled.

Check out my schedule here.

There are quite a few open weeks in the first couple of months of the year, but during that time I will be writing a couple of whitepapers for Microsoft, managing several others, finishing the upgrade of my course to SQL Server 2008, planning for a preconference seminar at TechEd in Los Angeles and updating my web site. I think that should keep me busy.

I frequently get asked about my travel plans, and many people say something like: "Do you ever come to <insert a city name here>?" The answer is, there are only a couple of places I go to regularly, where I have public training partners, and those cities are Bellevue (Seattle area),  Beaverton (Portland area) and Minneapolis.  Other than that, I don't know where I will be going before I get scheduled to go there. And I get scheduled by someone asking me to come to their city or their company. Last year I taught private courses in Portland, Dallas, Austin, and Danbury, CT.  In addition to my regular training partners, I taught a public class in Brisbane, Australia. I gave a seminar in Kansas City, and a four-city roadshow in Sweden.  I didn't plan any of those; I was invited to come.

So if you would like my 5-day SQL Server Internals class to be offered in your city, talk to your boss. Several companies could get together to bring me in and share the cost.  Or you could get your local user group to arrange a one day seminar, which is what happened in Kansas City last year. (Unless there are several one day seminars, as I will be doing in Australia, a one day seminar is only possible in cities that are relatively easy to get to from Seattle.)

Although there are costs associated with the classes and seminars, I do frequently speak at local user group meetings for no charge. If you see that I will be in your city, and you'd like me to speak at your user group, just let me know!

Have a great new year!

~Kalen




Source Click Here.

You know about page splits... if a table has a clustered index, any new row has to be inserted into the appropriate page, based on the clustered index key order. However, if the page where the new row belongs is full, SQL Server will split the page into two pages, and then put the new row on one of the now half empty (I know, I know, I should say "half full") pages.

But what if the new row is bigger than 4000 bytes (half a page) in size? I always knew that sometimes a page might have to split into three to allow big rows to be inserted, but I just found out it could be worse than that. If a page is split into two, and the new row still won't fit, one of the half full pages is split again. But if the row is so big, it still won't fit, another split could occur. And if the row is still too big.... well, you get the idea. One of the SQL Server engineers at Microsoft made note of what he called a 'corner case' where one page split into 17!

I read about that right before I shut down my machine one night, but I couldn't stop thinking about it. I woke up early the next morning, and my first thought was, "Why am I such a geek?" But I am what I am, and I was still thinking about a 17 way split. I wondered if I could duplicate that...So I got up and started trying. I spent several hours on it, but the most I could get was one page splitting into 10. It's always good to have something to work towards. Maybe next time I'm on a long plane flight (see my next year's schedule for possibilities), I'll try some more.

So here's how I did it. I probably should have called the table Sybil, but I called it split_page instead.

You can use any database; I used tempdb.

-- Create the table
USE tempdb;
GO
SET NOCOUNT ON
GO
IF EXISTS (SELECT * FROM sys.tables
            WHERE name = 'split_page')
    DROP TABLE split_page;
GO
CREATE TABLE split_page
(id int identity(0,2) PRIMARY KEY,
id2 bigint default 0,
data1 varchar(33) null,
data2 varchar(8000) null);
GO

-- fill page until no more rows fit
INSERT INTO split_page DEFAULT VALUES;
GO 385

-- verify that there is only one data page
DBCC IND(tempdb, split_page, -1);
GO

DBCC IND should return one IAM page (PageType = 10) and one data page (PageType = 1). If you want to see how full the page is, you can run DBCC PAGE and look at just the header.  For your third parameter to DBCC PAGE, you should use the number for the data page that DBCC IND returned.

DBCC TRACEON(3604);
DBCC PAGE(tempdb, 1, 177, 0);
GO

The header has a value in it called m_freeCnt, and it told me there were only 11 free bytes left on the page, which is not enough for another one of these rows. (The INT primary key is always 4 bytes, and the BIGINT is always 8, so even without any overhead bytes, we need more than 11.)

-- Now insert one more row, this time filling the VARCHARs to the maximum length.
SET IDENTITY_INSERT split_page  ON;
GO
INSERT INTO split_page (id, id2, data1, data2)
      SELECT 111, 0, replicate('a', 33), replicate('b', 8000);
GO
SET IDENTITY_INSERT split_page  OFF;
GO

When you look at DBCC IND again, you should see that the table now has 10 data pages! (Plus one IAM page, and one index page for the clustered index root, which has PageType = 2)

DBCC IND(tempdb, split_page, -1);
GO

Here's my output:

[Image: DBCC IND output for split_page]

(I really don't know how to make the screen shot any clearer. If you click on it, it should open in a browser window by itself and be a little easier to read. If I try to enlarge it when pasting it here, it becomes very blurred.)

Have fun!

~Kalen




Source Click Here.

I frequently do calculations against decimal values. In some cases casting decimal values to float ASAP, prior to any calculations, yields better accuracy. For example, in the following script both expressions should return 12100.0:

SELECT POWER(POWER(12100.0, 0.01), 100),
       POWER(POWER(CAST(12100.0 AS FLOAT), 0.01), 100)

--------------------------------------- ----------------------
13780.6                                 12100

Another example demonstrates poor accuracy of averaging decimals as compared to averaging...(read more)

Source Click Here.

Lies, damned lies, and statistics!

 

If you have read my three previous posts (1, 2, 3), you may walk away with an impression that on a drive presented from a high-end enterprise-class disk array, Windows file fragmentation does not have a significant performance impact. And I've given you empirical data (oh yeah, statistics) to support that impression.

 

But that is not the whole story! No, I didn't lie to you. The numbers I presented were solid. It's just that the story is not yet finished.

 

In these previous posts on the performance impact of file fragmentation, I presented the I/O throughput data as the evidence. The arguments were valid, especially since we did see file fragmentation causing I/O throughput to degrade on directly attached storage. But I/O throughput is only one I/O performance metric, and it is not enough to look at I/O throughput alone.

 

Let me start with an analogy. Suppose we have an eight-lane super highway going from New York City to Los Angeles. As we pump (okay, drive) more cars from NYC to LA, we take measurements at a checkpoint in LA to find out how many cars are crossing that checkpoint every hour, i.e. we are measuring the throughput of the super highway. Now, instead of building the eight-lane super highway straight from NYC to LA, we have it take a detour via Boston. At that same checkpoint in LA, we again measure the throughput. Everything else being equal, we should get the same throughput.

 

However, for a given car, the trip from NYC to LA would take a lot longer on this detoured highway.

 

An I/O path is similar to a super highway. While its throughput is an important measure, how long it takes for an I/O request to complete (I/O latency, or response time) is also an important measure. The question is, will file fragmentation take your I/O traffic for a detour?

 

Indeed, empirical test data show that when a file is severely fragmented, the maximum I/O latency of large sequential reads and writes (e.g. 256KB reads and writes) can suffer significantly. The following chart shows the impact of file fragmentation on the maximum I/O latency. The data were obtained from the same tests whose throughputs were reported in Part III of this series of posts.

Clearly, when the test file was fragmented into numerous 128KB disconnected pieces, some of the 256KB reads suffered terrible latency degradation. And if your applications happen to be issuing these I/Os, you would most likely experience performance degradation regardless of whether the I/O path can maintain the same I/O throughput.
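As an aside (not from the original post), if you want a rough picture of the read and write latencies your SQL Server database files are actually seeing, the virtual file stats DMV is one place to start:

-- Average stall per read/write for every database file on the instance
-- (SQL Server 2005 and later).
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.[file_id],
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_ms DESC;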

 

Having some valid statistics may seem to add force to an argument, which makes it so much easier to be misleading if the whole story is not told, and technically everything is valid, and nobody is lying. By the way, this is a trick often employed by the vendors, who tend to conveniently ignore the bad news, or intentionally bury it with statistics on the good news. In my book, that would be lies, damned lies, and statistics.

 




Source Click Here.

LINQ to SQL allows you to write your data access statements in your .NET code in LINQ, which is then translated into SQL. LINQ supports many of the same constructs and predicates as SQL; however, some aren't supported.

One set of predicates that isn't supported is the full-text set, i.e. CONTAINS, CONTAINSTABLE, FREETEXT and FREETEXTTABLE.

That means you can't write something like,

var mySearchList = from s in new SQLBitsDataContext().sessions
                   where s.Contains("Description,Title", "simon")
                   select s;

You can however do

var mySearchList = from s in new SQLBitsDataContext().sessions
                   where s.description.Contains("simon")
                   select s;

So you might think that full text is supported. Unfortunately Contains is translated into a LIKE predicate with two wildcards, i.e.

select *
  from sessions
 where description like '%simon%'

So how do you get it working? Well, the magic is in inline table-valued functions. I've talked about them before when discussing the performance of scalar functions. An inline table-valued function is essentially a parameterised SQL statement, and importantly only one statement. This means the optimiser can merge the SQL statement into the query that uses it.

To get it working, you need to create a table-valued function that does nothing more than a CONTAINSTABLE query based on the keywords you pass in:

create function udf_sessionSearch
      (@keywords nvarchar(4000))
returns table
as
  return (select [SessionId], [rank]
            from containstable(Session, (description, title), @keywords))

 

You then add this function to your LINQ to SQL model and hey presto, you can now write queries like this:

var sessList = from s in DB.Sessions
               join fts in DB.udf_sessionSearch(SearchText)
                 on s.sessionId equals fts.SessionId
               select s;

If you want, you can extend this to limit the number of results from CONTAINSTABLE, as sketched below. If you want to search different columns then you will need different functions, as the column list used by CONTAINSTABLE cannot take a parameter.
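For example, one possible way to add a row limit (a sketch of my own, not from the original post) is to pass the limit in as a second parameter and let TOP trim the CONTAINSTABLE results by rank:

-- Hypothetical variant of the function above: returns only the
-- @maxRows highest-ranked matches for the supplied keywords.
create function udf_sessionSearchTop
      (@keywords nvarchar(4000), @maxRows int)
returns table
as
  return (select top (@maxRows) [SessionId], [rank]
            from containstable(Session, (description, title), @keywords)
           order by [rank] desc)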






Source Click Here.

Watch the video and read the book in the genius marketing campaign for Windows Home Server: http://www.stayathomeserver.com/MommySite/default.aspx

It did make me grin but I'd already bought it recently from the Microsoft Company Store on a whim.  I've installed it on an old Media Center PC and it sits quietly on my desk drawers backing up our family PCs in the early hours of the morning to an external USB drive.  I've been very impressed with its simplicity and while I've yet to test it in a "disaster recovery" scenario I'm confident it'll do what it says on the tin.

It's worth a look if the idea of a home backup/file server gets your attention, if only so you don't have to worry about losing the family photo collection or your other half's music collection.  Yes, I've been there.  Apparently, there's an expectation that if you work in IT you're backing up everyone's PC in the house.  If you didn't know that, then buy Windows Home Server before you get caught out! ;)

Christian Bolton  - MCA: Database, MCM: SQL Server, MVP
Database Architect
http://coeo.com - SQL Server Consultants

 



Source Click Here.

In the first of a series of articles on the tricks of tackling temporal data in SQL, Joe Celko discusses SQL's temporal data types and agonizes over the fact that, though there are ANSI/ISO Standards for temporal operations in SQL, every vendor has something different. He explains the mysteries of such things as time-zones, lawful time, UTC, CUT, GMT, CE, DST, and EST.

Source Click Here.