Monday 19 October 2015

Web API, REST or RPC

Issue

I have a service method -

public String GetDeleteErrorLog()
{
    ...
}

I'm using the URL http://lpsservices/api/ScheduledTask/DeleteErrorLog/ and the Web API works fine.

But when I add another method -

public ScheduledTask GetTask(String id)

it always calls the second method (and fails because the id is not present).

(BTW it's not an object delete - "DeleteErrorLog" is just the name of the process.  I return an id and then want to get the status of the task.)

 

Cause

I contacted our REST guru and here's what he said:

"Web API pattern-matches against controller methods to identify the correct method.  Anything prefixed with Get is, by convention, an HTTP GET.  So you have a ScheduledTaskController that resolves to ScheduledTask, and it is verb based - you call /lpsservices/api/ScheduledTask and that is all.

You are in RPC world with DeleteErrorLog.  It thinks DeleteErrorLog is a parameter, because your route definition most likely has /api/{controller}/{id} where id is optional; therefore DeleteErrorLog resolves as the id parameter to your method.

Web API does not like nouns.  Parameters either go behind the "?" (e.g. ?id=DeleteErrorLog) or in the route (/api/{controller}/{id})."
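For reference, this is the default route registration he's describing – the stock Web API project template puts this in WebApiConfig:

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);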

After he explained it again I understood the implication.  This essentially means that all the GET methods must have a different number of parameters.  The "paradigm" seems to be that you are carrying out REST requests on an object.

But why?  I went back to the guru:

the issue with {action} is that it makes it very difficult to leverage http caching

15:29

as managing expiration of urls after inserts, updates and deletes gets tricky

me
15:30

I'm starting an async process and returning the id (guid) so I can retrieve the status of that process - not really going to want http caching!

shay doherty
15:32

that's only if you use http-level caching! Like you say, you don't need it currently, but for some stuff in LPS the HTTP client cache may become very useful if the client app does a lot of repetitive GET requests for lists of data

15:33

HttpClient supports http caching with the "CacheCow" add-on!

then managing expiration of those datasets may be important

15:34

so it might come in handy"

 

Solution

It seems I may have drifted from the paradigm – perhaps my parameter should be the name of the service – but the ID is a string too.  Perhaps I am "PUT"ting, as I'm creating the task – but just "GET"ting the (task) token.

However, to solve the problem I have added a new route (in WebApiConfig).

config.Routes.MapHttpRoute(
    name: "ActionApi",
    routeTemplate: "actionapi/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

So now any URL beginning with /actionapi/ rather than /api/ will match on both the controller (class) and the action (method).

You can mark the actions too e.g. [ActionName("DeleteErrorLog")] or just let them be the same as the method name.
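For illustration, here's roughly how the controller looks with the action route in place.  This is a sketch only – the ScheduledTask type and the method bodies are placeholders, not the real implementation:

using System.Web.Http;

public class ScheduledTask
{
    // placeholder type for the sketch
    public string Id { get; set; }
    public string Status { get; set; }
}

public class ScheduledTaskController : ApiController
{
    // resolves as GET /actionapi/ScheduledTask/DeleteErrorLog
    [ActionName("DeleteErrorLog")]
    public string GetDeleteErrorLog()
    {
        return "error log deleted"; // placeholder body
    }

    // resolves as GET /actionapi/ScheduledTask/GetTask/{id}
    public ScheduledTask GetTask(string id)
    {
        return new ScheduledTask { Id = id, Status = "Running" }; // placeholder body
    }
}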

Conclusion

If you want to use explicit actions, just change your URL.

Watch out for certificate expiration in database mirroring endpoints.

When database mirroring is used between two servers that are not in a domain, as can often be the case when using hosting services from another vendor, certificates are used to secure endpoints.

The MSDN article - Example: Setting Up Database Mirroring Using Certificates (Transact-SQL) explains how to do this.

What it does not mention is that SQL Server-generated certificates only last a year by default.  A year later, all your mirroring starts to fail!

The following event log error helped me diagnose the situation -

“…Connection handshake failed. The certificate used by this endpoint was not found: Certificate expired. Use DBCC CHECKDB in master database to verify the metadata integrity of the endpoints. State 85…”

To rectify this you have to generate new certificates, copy them across to the mirrored servers, and change the endpoint settings to use the new certificates.

This is explained here - http://www.sql-server-performance.com/faq/database_mirror_certificate_expired_p1.aspx

Note –

1) I had to specify a start date when specifying an end date else I got a warning.

2) I used the following command to change the certificate -

ALTER ENDPOINT Endpoint_Mirroring FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE cert_Mirror_01)

I then had to restart the mirroring process for all databases, as I'd disabled mirroring.

So – if you're creating certificates for mirroring endpoints – give them a nice big expiration date!

Report Viewer in Azure

I kept getting an issue where, when deployed to Azure, the application could not find Microsoft.ReportViewer.Common v11.0.0.0, but when I tried adding it to the project this file was not listed in the extensions.  Eventually I added the DLL directly from the GAC (as a file) with Copy Local set, and bingo.

Do the same for Microsoft.ReportViewer.ProcessingObjectModel.

FYI – you can see these in C:\Windows\assembly, but when you come to add them in Visual Studio you need to browse to the sub-directory given in the properties – MSIL.

Moving Twitter to a background thread.

In my previous blog I added some simple Twitter integration to tweet the entries on I'm Never Going Back There.  This was being done in the same thread as the user's HTTP request – which is not great, as the API calls to Twitter could be slow or fail.

So, my 30 minute task this week was to move it to a background thread.


Design

The design is fairly simple:

  1. Create a flag on the comment to indicate if it has been tweeted.
  2. Write code that retrieves all un-tweeted comments, tweets them and updates their status.
  3. Execute this code in a background thread as each comment is added and on application start up.

Without some locking (such as a distributed transaction with serializable read isolation level) this could lead to duplicate Twitter entries if two comments were submitted simultaneously.

However, Twitter does not allow duplicate tweets, so the second tweet should simply be rejected by Twitter.
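If that locking were ever needed, a sketch with System.Transactions might look like this (illustrative only – I'm not doing this here):

using System.Transactions;

public static class TweetLock
{
    // wraps the read-tweet-update cycle so two overlapping runs can't both
    // pick up the same un-tweeted comments
    public static void TweetUntweetedWithLock()
    {
        var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // ... read the un-tweeted comments, tweet them, mark them as tweeted ...
            scope.Complete();
        }
    }
}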

Database changes

Since we’re now in production we need to script our database changes.

Adding the extra column and changing the insert stored procedure:

ALTER TABLE tblComment ADD bTweeted bit not null default 0
GO
ALTER PROCEDURE [dbo].[usp_CommentInsert]
(
    @p_vcrComment nvarchar(1000),
    @p_intVenueId INT,
    @p_vcrUserUpdated  MediumString
)
AS
DECLARE @ident INT
INSERT INTO tblComment VALUES(@p_vcrComment,
    @p_intVenueId,
    1,
    @p_vcrUserUpdated,
    GetDate(),
    GetDate(),
    0)
SET @ident = SCOPE_IDENTITY()
SELECT @ident
GO

We will also need to update our comments -

CREATE PROCEDURE usp_CommentUpdate
(
    @p_intCommentId INT,
    @p_vcrComment nvarchar(1000),
    @p_intVenueId INT,
    @p_vcrUserUpdated  MediumString,
    @p_tweeted bit
)
AS
UPDATE tblComment
SET vcrComment = @p_vcrComment,
intVenueId = @p_intVenueId,
vcrUserUpdated = @p_vcrUserUpdated,
dteLastUpdated = GetDate(),
bTweeted = @p_tweeted
WHERE
intCommentId = @p_intCommentId
GO
GRANT EXEC ON usp_CommentUpdate TO VICTOR
GO

And fetch un-tweeted comments:


CREATE PROCEDURE usp_CommentGetUntweeted
AS
SELECT * from tblComment c
INNER JOIN tblVenue v
ON c.intVenueId = v.intVenueId
INNER JOIN tblVenueType t
ON v.intVenueTypeId = t.intVenueTypeId
WHERE  (c.bTweeted = 0 AND blnApproved=1)
GO
GRANT EXEC ON usp_CommentGetUntweeted TO VICTOR
GO

Changes to the Business objects

The Comment class has an extra attribute to indicate if it has been tweeted.

Changes to the Data Access Layer (DAL).

The data access layer changes:

  1. Populate the new attribute on the class as the object is created from a data reader (the amended create method is described in the previous blog "Ditching the Entity Framework").
  2. Expose methods for the new update and search stored procedures.

Changes to the Business Processes

A couple of new methods here to handle the threading -

public static void TweetCommentsAsync()
{
    ThreadPool.QueueUserWorkItem(new WaitCallback(TweetComments));
}

public static void TweetComments(Object stateInfo)
{
    // fetch all the comments not yet tweeted and tweet each one
    ICollection<Comment> commentsToTweet = DBComment.GetUntweetedComments();
    foreach (Comment c in commentsToTweet)
    {
        TweetComment(c);
    }
}

 

The TweetCommentsAsync() method queues the TweetComments method to be called on a thread from the thread pool.  The runtime controls these threads, normally limiting the number of active threads at any one time to prevent too much context switching.  I believe 25 threads per CPU used to be the normal allocation in ASP.NET – but this is from memory.

An overloaded version of ThreadPool.QueueUserWorkItem allows data to be passed to the method to be invoked.
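For example, a minimal sketch of that overload (the state string here is just illustrative):

using System;
using System.Threading;

class Example
{
    static void Main()
    {
        // the second argument is handed to the callback as its Object parameter
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), "some state");
        Thread.Sleep(1000); // crude wait so the pool thread gets to run in this demo
    }

    static void DoWork(Object stateInfo)
    {
        Console.WriteLine("State passed in: {0}", stateInfo);
    }
}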

Obviously the TweetComment method is also changed to update the database.

TwitterResponse<TwitterStatus> resp = TwitterStatus.Update(tokens, tweet);
if (resp.Result == RequestResult.Success)
{
    // update the comment to mark it as tweeted
    c.Tweeted = true;
    UpdateComment(c);
    ret = true;
}

Obviously we call TweetCommentsAsync each time a comment is added, and also from the application start-up code in the Global.asax.

Conclusion

ThreadPool.QueueUserWorkItem provides a simple, safe mechanism for performing asynchronous operations – for example, when we want the request thread to complete as quickly as possible to optimise user responsiveness.

How to specify dates in sql scripts.

Always use the ISO 8601 date format -

yyyy-mm-ddThh:mm:ss[.mmm]

You must specify everything except the [.mmm], e.g. 2014-12-24T00:00:00.  SQL Server interprets this form the same way regardless of the SET LANGUAGE and SET DATEFORMAT settings, which is exactly what you want in a script that has to run anywhere.
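If you're generating script literals from .NET, the sortable "s" format string produces exactly this pattern – a small illustration:

using System;

class IsoDateExample
{
    static void Main()
    {
        DateTime d = new DateTime(2014, 12, 24);
        // "s" is the sortable ISO 8601 pattern: yyyy-MM-ddTHH:mm:ss
        Console.WriteLine(d.ToString("s")); // prints 2014-12-24T00:00:00
    }
}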
 

How I create history or audit tables.

OK – not too complex this but quick and easy.

 

Firstly I create a history table – to do this I get SQL Server Management Studio to generate a create script for the table I want to audit.  I add the suffix "History" to the table name, remove the constraints and indexes, and add in two extra columns: one for the type of update and one for the date/time, e.g.

 

CREATE TABLE [dbo].[tblProductCodeHistory]
(
    chrChangeType nchar(1) NOT NULL,
    dteChangeTime DateTime NOT NULL,
    [intProductId] [int]  NOT NULL,
    [intSalesAccountId] [int] NOT NULL,
    [intSalesTaxCodeId] [int] NOT NULL,
    [vcrDescription] [nvarchar](100) NULL,
    [intBusinessId] [int] NOT NULL,
    [decNetPrice] [money] NULL,
    [vcrProductCode] [nvarchar](100) NOT NULL,
    [decCost] [money] NULL,
    [intSupplierAccountId] [int] NULL,
    [intProductTypeId] [int] NOT NULL,
    [intQuantity] [decimal](10, 3) NOT NULL ,
    [intUserId] [int] NULL,
    [dteUpdated] [datetime] NULL,
    [vcrUnit] [dbo].[ShortString] NULL DEFAULT (NULL),
) ON [PRIMARY]
GO

Then I create a couple of triggers on the original table as follows -

CREATE TRIGGER [dbo].[ProductCode_Delete]
   ON  [dbo].[tblProductCode]
   AFTER DELETE
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    INSERT INTO tblProductCodeHistory
    SELECT 'D', GetDate(), * FROM deleted

END

GO
CREATE TRIGGER [dbo].[ProductCode_Update]
   ON  [dbo].[tblProductCode]
   AFTER UPDATE
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    INSERT INTO tblProductCodeHistory
    SELECT 'U', GetDate(), * FROM deleted -- 'deleted' holds the pre-update column values

END

 

Once you've done this once, just do a search and replace to create the next set of triggers, e.g. replace ProductCode in the above script with MyTable.

One thing to bear in mind: if you have a username column in the original table then you might want to update it as part of the delete stored proc, so the last username makes it into the history table.

A colleague of mine has been promising to create a script to generate all this from sysobjects – but I'm not holding my breath!

A quick bit of twitter integration.

OK – so it looks like I’m not going to replace Facebook or Twitter with my social networking site I’m Never Going Back There.   After a full day in production there’s only been one comment.

So – if you can’t beat them, join them.  Time for a bit of integration.


So – if I can post entries to Twitter with a link to the site, and get people to follow the Twitter account, this should give me lots of links, which means lots of Google ranking.

I also want to be able to get people to authenticate with their Twitter account and post entries to Twitter.

But this is a spare-time project!  So – first, a bit of Twitter integration.

So I had a quick dig around on the Twitter developer pages.  It seems their API is in some flux between REST and streaming APIs.  I decided I didn't want to get involved in any low-level stuff, so I picked Twitterizer from the list of Twitter integration libraries.

Posting a tweet

My first goal was to create a Twitter entry for each comment.

The Code

I downloaded the DLLs and put them in a new folder in our Framework classes – where we reference shared components such as the MS Enterprise Library blocks.

I added references to them from my shiny business layer created when I ditched Entity Framework.

I registered a new Twitter user, @imnevergoing, and registered my application using this account.  I then downloaded my two application keys (Consumer Key and Consumer Secret) and the two OAuth access tokens.

The first two tokens authenticate my application.  The second two allow me to authenticate @imnevergoing without directing a user to authenticate on the Twitter site.  The posts will always be posted as @imnevergoing.

The code is pretty simple.  I form my authentication tokens as instructed in the Twitter documentation here, and form a tiny URL back to my site using the code I borrowed from "Shortening URLs Using TinyUrl API in .net".

I then simply form my post text and post the comment!

 

    public static bool TweetComment(Comment c)
    {
        bool ret = false;

        try
        {
            OAuthTokens tokens = new OAuthTokens();
            tokens.ConsumerKey = ConfigurationSettings.AppSettings["consumerKey"];
            tokens.ConsumerSecret = ConfigurationSettings.AppSettings["consumerSecret"];
            tokens.AccessToken = ConfigurationSettings.AppSettings["oauth_token"];
            tokens.AccessTokenSecret = ConfigurationSettings.AppSettings["oauth_token_secret"];

            String url = MakeTinyUrl("http://www.imnevergoingbackthere.com/");
            StringBuilder sb = new StringBuilder();
            sb.Append(c.TheVenue.Name);
            sb.Append(" ");
            sb.Append(c.TheVenue.Town);
            sb.Append(" ");
            sb.Append(c.CommentText);

            String tweet = sb.ToString(0, Math.Min(sb.Length, 140 - (url.Length + 1)));
            tweet = String.Format("{0}-{1}", tweet, url);

            TwitterResponse<TwitterStatus> resp = TwitterStatus.Update(tokens, tweet);
            if (resp.Result == RequestResult.Success)
            {
                // update the comment
            }
        }
        catch
        {
            // log the error
        }

        return ret;
    }
    public static string MakeTinyUrl(string Url)
    {
        try
        {
            if (Url.Length <= 30)
            {
                return Url;
            }
            if (!Url.ToLower().StartsWith("http") && !Url.ToLower().StartsWith("ftp"))
            {
                Url = "http://" + Url;
            }
            var request = WebRequest.Create("http://tinyurl.com/api-create.php?url=" + Url);
            var res = request.GetResponse();
            string text;
            using (var reader = new StreamReader(res.GetResponseStream()))
            {
                text = reader.ReadToEnd();
            }
            return text;
        }
        catch (Exception)
        {
            return Url;
        }
    }
}

I then call this after saving the comment to the database, and, since it’s in my business process layer, this will happen whatever the client (web service, web page …).

[DataObjectMethodAttribute(DataObjectMethodType.Insert, true)]
public static int InsertComment(Comment comment)
{
    int ret = 0;
    ret = DBComment.InsertComment(comment);
    TweetComment(comment);
    return ret;
}

The Tao

Thus spake the master programmer:

``Though a program be but three lines long, someday it will have to be maintained.''

The Tao Of Programming

This may look fine but it's not particularly great, as:

  1. It's done synchronously as part of the web request.  This could take a significant amount of time.
  2. If it fails it will never be tweeted.
  3. I would like a link to the actual comment, as it's more meaningful and probably better for SEO – but I don't have the view yet!

A much better plan is to extend the object/database to indicate whether the comment has been tweeted, and then, on each comment add and on website start-up, kick off a thread to retrieve all the un-tweeted comments and tweet them.

I’ll cover this in a separate blog.

Follow Me

This should be pretty easy - a simple button from the Twitter goodies site in the master page should do the trick.

Tweet this.

Ideally I’d like to be able to tweet each individual comment – but I haven’t got the view or time for this yet.

So I settled for two buttons on the master page – a "tweet this" button and an "add this" button – the latter covers a lot of other social networking sites.

The Twitter button can be built from Twitter's goodies, and the "add this" button from AddThis!

A couple of options for server side word doc generation

Traditionally, generating Word documents on the server has involved installing Word on the server and using COM interop to generate the documents.  This is unsupported and has licensing issues.  Many have tried and got it to work, though.

Newer versions of Word are XML-based; Word 2013 uses Office Open XML (OOXML), an XML-based format developed by Microsoft.

Challenge

The client was using automation of an Office 7 client, running as a logged-in user, to generate documents from templates.  They had upgraded their Office to 2013, so were looking for a way to generate documents as a proper service.

Simple Substitution

The first option was to do simple string substitution.  A quick Google (other search engines are available) found this solution: "Use OpenXML to create a Word document from a docx template".

I downloaded and installed the Open XML SDK and toolkit.

The first thing I did was try to convert the existing Word document, so I opened it with Word 2013 and saved it as docx.  I was surprised that I couldn't open it with Visual Studio – I was expecting a big lump of XML, but it turns out that a docx is in fact a zipped directory of files, which contain the XML.

You can view the internal files by renaming the docx to zip and unzipping.
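You can also do this programmatically – a quick sketch using .NET 4.5's ZipFile class (the template path is the one used in the tests below):

using System;
using System.IO.Compression; // reference System.IO.Compression.FileSystem

class DocxInspector
{
    static void Main()
    {
        // a docx is just a zip package - list the files inside it
        using (ZipArchive archive = ZipFile.OpenRead(@"C:\DocService\TEMPLATES\FORM 63A.docx"))
        {
            foreach (ZipArchiveEntry entry in archive.Entries)
            {
                Console.WriteLine(entry.FullName); // e.g. word/document.xml
            }
        }
    }
}

The search-and-replace method, essentially as lifted from that article, then looks like this -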

public static void SearchAndReplace(string document,
            String output,
            Dictionary<string, string> dict)
        {
            // note - the file is copied first as
            // docx is actually a zip of loads of files.
            // for production  would want to delete on failure.
            File.Copy(document, output);
            document = output;
            using (WordprocessingDocument wordDoc = WordprocessingDocument.Open(document, true))
            {
                string docText = null;
                using (StreamReader sr = new StreamReader(wordDoc.MainDocumentPart.GetStream()))
                {
                    docText = sr.ReadToEnd();
                }

                foreach (KeyValuePair<string, string> item in dict)
                {
                    Regex regexText = new Regex(item.Key);
                    docText = regexText.Replace(docText, item.Value);
                }

                using (StreamWriter sw = new StreamWriter(
                   wordDoc.MainDocumentPart.GetStream(FileMode.Create)))
                {
                    sw.Write(docText);
                }
            }
        }

I pretty much copied the code from above into a utility class.  The only change I made was to introduce a destination file.  You'll notice I copy source to destination.  Initially I tried just writing the amended data – but this is just one file from the entire docx structure.  For production I would use an intermediate location, to ensure onward processing does not pick up a file that has not actually succeeded or not yet had the substitutions done.

When I ran the test it complained about the document format.

So I tried to open the docx using the "Open XML Productivity Tool" – it also complained.  So I created an empty document and copied and pasted the contents in – test code below:

//TODO work out how to read this from config in a test.
        private static string routeInputDir=@"C:\DocService\TEMPLATES";
        private static string routeOutputDir = @"C:\DocService\TEMPLATES";
       
        /// <summary>
        /// This tests that the document was generated using substitution.
        /// </summary>
        [TestMethod]
        public void BasicTestForm63ABySubstitution()
        {
            String inputFile = String.Format("{0}\\{1}", routeInputDir, "FORM 63A.docx");
            String outputFile = String.Format("{0}\\{1}", routeOutputDir, "My document test out.docx");
            WordDocumentGenerator.SearchAndReplace(inputFile,
                outputFile,
                GetSubstDict());

        }

        private static System.Collections.Generic.Dictionary<string, string> GetSubstDict()
        {
            return new System.Collections.Generic.Dictionary<string, string>{
                {"\\[Lands of\\]","Stuart McLean"},
                {"\\[Registered Owner\\]","Stuart "}
                };
        }

 

Quite simply – it worked.  As you can see, I had to escape the regex special characters a bit.

Using the Open XML Productivity tool.

I opened the document using the tool, pressed "Reflect Code" and took the generated code into a new class.

 


 

I added a member for the substitution parameters and changed the CreatePackage method to take them as an extra parameter.

 

private Dictionary<string, string> m_dict;

      // Creates a WordprocessingDocument.
      public void CreatePackage(string filePath,
          Dictionary<string, string> dict)
      {
          m_dict = dict;
          using (WordprocessingDocument package = WordprocessingDocument.Create(filePath, WordprocessingDocumentType.Document))
          {
              CreateParts(package);
          }
      }

I then did a search for the text I wanted to replace and set it to the relevant dictionary entry.

text7.Text = "Registered Owner";

Becomes

text7.Text = m_dict["Registered Owner"];

The test then simply creates the class and calls the method -

private static System.Collections.Generic.Dictionary<string, string> GetSubstDict()
        {
            return new System.Collections.Generic.Dictionary<string, string>{
                {"\\[Lands of\\]","Stuart McLean"},
                {"Registered Owner","Stuart "},
                {"\\[Name\\]","Jo Bloggs"}
                };
        }

        /// <summary>
        /// This tests that the document was generated.
        /// </summary>
        [TestMethod]
        public void Form63AByGenerated()
        {
            Form63A form = new Form63A();
            String outputFile = String.Format("{0}\\{1}", routeOutputDir, "documentgenout.docx");

         form.CreatePackage(outputFile,
             GetSubstDict()
             );
        }

Conclusion

Both methods are pretty simple. 

Simple substitution is probably easier to maintain if the templates are changing frequently but may have issues with regexes and complex substitutions (tables, images etc.).

Generated code is harder to maintain if the templates change frequently but will be able to handle complex substitutions.  Also, much of the content is common (styles etc.) so this could probably be hived off to a common class to make this easier to maintain.

Windows 7 and reporting services

I recently upgraded from Vista to Windows 7.  It all went pretty well and I was particularly impressed with the way it transferred my settings from my old laptop.  Let's face it, after the catastrophe that was Vista, anything was going to be better.

Anyway, I had problems with Reporting Services, which I was using to test some reports for Fusion Accounts.  I usually use Firefox, but the page in the Reports application for changing the connection string of a data source does not work in Firefox.

When I connected with IE8 I did not seem to have any permissions.  I tried switching off the automatic Windows authentication in IE8, but this did not seem to have any effect.

Eventually I remembered that in Vista I had switched off all the "run as administrator" checking stuff (UAC), but had decided to leave it on in Windows 7.

So I found IE8 in the menu structure, right clicked on it and selected “Run as administrator”.  Bob was then my uncle.

I'm sure I'll get fed up with the security checks soon and turn them off.  It took me about 20 minutes earlier to work out how to edit my hosts file with Notepad!

Worried about Audit Logout

I must have been staring at SQL Server Profiler on and off for 10 years looking for durations of greater than 100.

It's always worried me that the many Audit Logout events, like the one below, have such high numbers -

Audit Logout        .Net SqlClient Data Provider        web_user   0    3733    0    656    700    52   

656  – that’s a lifetime.

Seems I was right to ignore them though!

According to MS

Duration - Contains the length of time between the login event and the logout event.

At last I can sleep easy.

Setting the process identity on all app pools – IIS 6

Why

When constrained delegation is set up on a machine, all the application pools must be set to use the identity of the account for which constrained delegation has been configured.

What you need

A copy of iis_adsutil.vbs, and the script below (also in the zip), which you can hack on the fly:

Set fso = CreateObject("Scripting.FileSystemObject")
Set ts = fso.OpenTextFile("appools.txt", 1)
   
Set ipList = CreateObject("Scripting.Dictionary")
   
Do Until ts.AtEndOfStream
    ' Read line from file
    sLine = ts.ReadLine
    'CreateAppPool sLine
    SetAppPoolId sLine
   
Loop
   
ts.Close



Sub SetAppPoolId(strAppPool)
Set objAppPools = GetObject("IIS://localhost/W3SVC/AppPools")
Set objPool = GetObject("IIS://localhost" & strAppPool)
'configurable identity
objPool.AppPoolIdentityType = 3
'set username
objPool.WAMUserName = "domain\account"
'set password
objPool.WAMUserPass = "password"
objPool.SetInfo
end Sub

Sub CreateAppPool(strAppPool)
Set objAppPools = GetObject("IIS://localhost/W3SVC/AppPools")
Set objAppPool = objAppPools.Create("IIsApplicationPool", strAppPool)
objAppPool.SetInfo
'Set objPool = GetObject("IIS://localhost/W3SVC/AppPools/" & strAppPool)
'configurable identity
'objPool.AppPoolIdentityType = 3
'set username
'objPool.WAMUserName = "USERNAME"
'set password
'objPool.WAMUserPass = "password"
'objPool.SetInfo
end Sub
sub SetAppPool(ObjApp)
set vDir = GetObject("IIS://localhost/W3svc/1/Root/" & ObjApp)
vDir.Put "AppPoolID", ObjApp
vDir.SetInfo
WScript.Echo "AppPool " & vDir.AppPoolId
end Sub

What to do

This isn't step-by-step stuff – I expect you to apply some grey matter.

  1. At the command prompt run: iis_adsutil.vbs ENUM /P W3SVC/apppools > appools.txt
     This creates a text file with all the app pool names.
  2. Edit the file to remove the leading '[' and trailing ']'.
  3. Run the script above, which reads the file and sets the WAMUserName and WAMUserPass on each app pool.

Additional Steps
The account will also need to be put into the IIS_WPG group.
The same account needs permission (Read/List Folder) on the C:\Windows\Temp folder on the server.
 

The ultimate paging strategy?

Scene

Analyst: “the users want to see a list of widgets they’ve got”

Developer:”OK”

Sometime later …

Manager: ”The widget list is taking them ages to load – can you take a look at it?”.

Developer:”How many widgets have they got now?”

DBA:”3000”.

Developer:”Better turn on paging in the grid control then”.

Sometime later…

Manager: ”The widget list is taking them ages to load – can you take a look at it?”.

Developer:”How many widgets have they got now?”

Analyst:”300 000”.

Developer:”Move the paging to the database then”.

Sometime later…

Manager: ”The widget list is taking them ages to load – can you take a look at it?”.

DBA:”Also, my database server is running hot – can I have some more CPUs?”.

Developer:”How many widgets have they got now?”

Analyst:”30 000 000”.

Developer:”I’ll have a look later when I’ve finished updating my cv”.


Standard Paging strategies

Don't page
  Advantages: easy.
  Disadvantages: no good once the number of rows exceeds 100 or so.

Off-the-shelf grid paging, e.g. using GridView
  Advantages: fairly painless to implement.
  Disadvantages: once the total number of rows exceeds a few hundred it becomes inefficient, as all the data is loaded and processed in the web application on each page request.

"Off-the-shelf" database paging
  Advantages: takes load off the web server and performs set-based operations in the database, which is better at them.
  Disadvantages: becomes inefficient with very large data sizes, particularly when requiring a total row count.  Out-of-the-box solutions – such as LINQ integration – can be inefficient as they run the same query twice: once to find the total rows and once to get the data.

My strategy

  • Do the paging in the database which is optimised for set based operations
  • Limit the number of rows that you will ever query
  • Use JSON services and JavaScript to minimise data transfer overheads – making your page quick and lean.

Over the years I've learnt it's always best to "do the right thing" at the start.  You may have some tight deadline that you're being told the future of the world depends on – but my advice is to spend an extra hour getting it right in the beginning – you'll save yourself a day down the line.

At the core of my strategy is a limit on the number of rows that are queried.  This does involve a slight UI compromise – the user needs to accept that on some occasions, instead of "Rows 50 – 100 of 10,000 rows matching your filter", they may get "Rows 50 – 100 of more than 10,000 rows matching your filter".

The 10,000 – or max rows ever counted ("maxrows") – introduces scalability into the design, as we will only ever filter this many rows.  Note – 10,000 is an arbitrary number I've chosen here; the number you use depends on the complexity and efficiency of your filter and the amount of data you are searching.  The strength of the strategy is that "maxrows" can be changed by the DBA for each query.

So – here’s my starting strategy for paging.  I’m going to start with the database and work my way up to the UI. 

Database paging with row limit.

Here is a complete sample stored procedure.

ALTER PROC [dbo].[usp_FuelStopsGetForUser]
(
    @vcrUserName nvarchar(255),
    @vehicleId INT,
    @fuelTypeId INT, -- IGNORED AT THE MOMENT
    @garage MediumString,
    @startRow INT,
    @numberToFetch INT,
    @rowcount INT output,
    @rowCountExceeded BIT output
)
AS
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
DECLARE @maxrows INT

set @maxrows = 100

SET @rowcount=0
    SELECT top(@maxrows+1) ROW_NUMBER() OVER (ORDER BY fs.dteStopDate DESC) AS 'Row',
    fs.intFuelStopId,
    max(fs1.intFuelStopId) AS intLastId,
    SUM(fs1.decLitres) AS TotalLitres,
    fs.dteStopDate
    INTO #tblFuelStop 
    FROM dbo.tblUserVehicle uv
    INNER JOIN tblFuelStop fs ON fs.intUserVehicleId = uv.intUserVehicleId
    LEFT OUTER JOIN tblFuelStop fs1 ON
        fs1.intUserVehicleId = fs.intUserVehicleId AND
        fs1.dteStopDate < fs.dteStopDate  AND
        fs.decODOKm IS NOT NULL AND fs1.decODOKm IS NOT NULL AND
        fs1.intPercentFull = 100  AND fs.intPercentFull = 100 AND
        fs.decLitres IS NOT NULL   
    WHERE uv.vcrUserName = @vcrUserName
    AND (@vehicleId = uv.intVehicleId OR @vehicleId IS NULL)
    AND (@garage IS NULL OR fs.vcrGarageName like @garage + '%')
    GROUP BY fs.intFuelStopId,
    fs.dteStopDate
    -- capture the filter's total row count now, before the UPDATE below resets @@ROWCOUNT
    SET @rowcount = @@ROWCOUNT

    -- remove any with null litres
    UPDATE #tblFuelStop
    SET intLastId = NULL,
    TotalLitres = NULL
    WHERE intFuelStopId IN (
    SELECT fs.intFuelStopId FROM dbo.tblUserVehicle uv
    INNER JOIN tblFuelStop fs ON fs.intUserVehicleId = uv.intUserVehicleId
    INNER JOIN #tblFuelStop fst ON fs.dteStopDate BETWEEN fst.dteStopDate AND fst.dteStopDate
    WHERE uv.vcrUserName = @vcrUserName
    AND fs.decLitres IS NULL)

    IF(@rowcount > @maxrows)
        set @rowCountExceeded = 1
    ELSE
        SET @rowCountExceeded = 0

    SELECT top(@numberToFetch) fs.*,
        CASE WHEN 0 = (fs.decODOKm - fs1.decODOKm) THEN NULL ELSE 100 * (fs.decLitres + fsf.TotalLitres)/(fs.decODOKm - fs1.decODOKm) END AS decLitresPer100KM ,
        fs1.intUserFuelStopId AS intSince       
    FROM #tblFuelStop fsf
    INNER JOIN tblFuelStop fs
    ON fsf.intFuelStopId = fs.intFuelStopId
    LEFT OUTER JOIN tblFuelStop fs1 ON fs1.intFuelStopId = fsf.intLastId
    WHERE Row > @startRow

Strategies:

  • Use a strongly typed stored procedure – it’s efficient, secure and maintainable.  There is simply no substitute!
  • Standard paging parameters – @startRow INT, @numberToFetch INT, @rowcount INT output – for integration with grid views etc. (there's an ADO.NET calling sketch after this list).
  • @rowCountExceeded BIT output - indicates that more than @maxrows rows matched – we just weren't prepared to count them all.
  • SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED – this basically says that if there is data that has not yet been committed, and so might roll back, I'll show it anyway.  Unless you're totting up transaction totals to decide whether to issue a mortgage, your users won't care, and you will probably never see any "phantom" rows.  So give your database a break and turn down the isolation level, so you don't have to take read locks on all the rows.
  • set @maxrows – this allows you to limit the amount of filtering you will ever do and can be changed as the data sizes etc. require.
  • SELECT top(@maxrows+1) - however much data you stuff in the database, your query should take the same amount of oomph!
  • ROW_NUMBER() OVER (ORDER BY fs.dteStopDate DESC) AS 'Row' – allows us to create a row number on the fly so we can select the page of data we want.
  • Use temporary table - INTO #tblFuelStop  - this means that we only run our expensive filter once and get the total rowcount for free (SET @rowcount = @@ROWCOUNT )
  • Do any expensive calculations only on the data you're returning:
    SELECT top(@numberToFetch) fs.*,
            CASE WHEN 0 = (fs.decODOKm - fs1.decODOKm) THEN NULL ELSE 100 * (fs.decLitres + fsf.TotalLitres)/(fs.decODOKm - fs1.decODOKm) END AS decLitresPer100KM ,
            fs1.intUserFuelStopId AS intSince       
        FROM #tblFuelStop fsf
        INNER JOIN tblFuelStop fs
        ON fsf.intFuelStopId = fs.intFuelStopId
        LEFT OUTER JOIN tblFuelStop fs1 ON fs1.intFuelStopId = fsf.intLastId
        WHERE Row > @startRow
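For completeness, here's a sketch of calling the proc and reading the output parameters from plain ADO.NET (my own DAL uses the Enterprise Library, so this is an illustration, not the code used below):

using System;
using System.Data;
using System.Data.SqlClient;

class PagingCall
{
    static void GetPage(string connectionString, string userName, int startRow, int numberToFetch)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("usp_FuelStopsGetForUser", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@vcrUserName", userName);
            cmd.Parameters.AddWithValue("@vehicleId", DBNull.Value);
            cmd.Parameters.AddWithValue("@fuelTypeId", DBNull.Value);
            cmd.Parameters.AddWithValue("@garage", DBNull.Value);
            cmd.Parameters.AddWithValue("@startRow", startRow);
            cmd.Parameters.AddWithValue("@numberToFetch", numberToFetch);
            SqlParameter rowCount = cmd.Parameters.Add("@rowcount", SqlDbType.Int);
            rowCount.Direction = ParameterDirection.Output;
            SqlParameter exceeded = cmd.Parameters.Add("@rowCountExceeded", SqlDbType.Bit);
            exceeded.Direction = ParameterDirection.Output;

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // ... map each row to your business object ...
                }
            } // output parameters are only populated once the reader is closed
            Console.WriteLine("Total: {0}, exceeded: {1}", rowCount.Value, exceeded.Value);
        }
    }
}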

Standard integration with gridviews – if you must

Personally I'm now avoiding grid views like the plague, as they generate loads of thick HTML, viewstate and other foul matter – but if you must (maybe because you've already got one), the single stored proc above allows a single call to get both rowcount and data – off-the-shelf integration techniques seem to do two calls, with twice the pain!

This requires a bit of a fudge in the object data source code, as below:

private int invoiceCount;

/// <summary>
      /// Get the invoice count from the search.
      /// </summary>
      /// <returns></returns>
      public int GetInvoiceCount(int businessId,
          DateTime? startDate,
          DateTime? endDate,
          String customer,
          int? invoiceReference,
          int status,
          int invoiceType,
          string purchaseOrderNumber,
          string dataField1,
          string dataField2,
          string dataField3,
          Decimal? minAmountPaid,
          Decimal? maxAmountPaid,
          Decimal? minFCAmountPaid,
          Decimal? maxFCAmountPaid,
          String customerCurrency,
          Decimal? minTotalGross,
          Decimal? maxTotalGross,
          Decimal? minFCTotalGross,
          Decimal? maxFCTotalGross,
          Decimal? minOCTotalGross,
          Decimal? maxOCTotalGross,
          String transactionCurrency)
      {
              return invoiceCount;
      }

      [DataObjectMethodAttribute(DataObjectMethodType.Select, true)]
      public DataSet GetInvoicesWithPaging(int businessId,
          DateTime? startDate,
          DateTime? endDate,
          String customer,
          int? invoiceReference,
          int status,
          int invoiceType,
          string purchaseOrderNumber,
          string dataField1,
          string dataField2,
          string dataField3,
          Decimal? minAmountPaid,
          Decimal? maxAmountPaid,
          Decimal? minFCAmountPaid,
          Decimal? maxFCAmountPaid,
          String customerCurrency,
          Decimal? minTotalGross,
          Decimal? maxTotalGross,
          Decimal? minFCTotalGross,
          Decimal? maxFCTotalGross,
          Decimal? minOCTotalGross,
          Decimal? maxOCTotalGross,
          String transactionCurrency,
          int maxRecords,
          int startIndex           
          )
      {
          DataSet set = DBInvoice.GetInvoicesWithPaging(businessId,
              startDate,
              endDate,
              customer,
              invoiceReference,
              status,
              invoiceType,
              purchaseOrderNumber,
              dataField1,
              dataField2,
          dataField3,
          minAmountPaid,
          maxAmountPaid,
          minFCAmountPaid,
          maxFCAmountPaid,
          customerCurrency,
          minTotalGross,
          maxTotalGross,
          minFCTotalGross,
          maxFCTotalGross,
          minOCTotalGross,
          maxOCTotalGross,
          transactionCurrency,
          maxRecords,
          startIndex,
          out invoiceCount
          );
          return set;
      }

GetInvoicesWithPaging is always called by the grid view first, so we set the invoice count as part of this call (out invoiceCount); the subsequent GetInvoiceCount call then simply returns the value we already have.

(How civilised).

When I experimented with LINQ this really hurt – the same generated query was being called twice – the second time with a simple count.  When querying a few million rows this was pretty painful – AVOID!
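For what it's worth, this is roughly the pattern that causes the double round trip – a sketch, where Invoice and MyContext are stand-ins for your LINQ-to-SQL/EF types:

using System.Collections.Generic;
using System.Linq;

// placeholder types for the sketch
public class Invoice { public int InvoiceId { get; set; } }
public abstract class MyContext { public abstract IQueryable<Invoice> Invoices { get; } }

public static class InvoicePager
{
    // typical LINQ paging - note the two separate round trips to the database
    public static List<Invoice> GetPage(MyContext db, int startIndex, int maxRecords, out int total)
    {
        IQueryable<Invoice> query = db.Invoices.OrderBy(i => i.InvoiceId);
        total = query.Count();          // round trip 1: the same filter run again as SELECT COUNT(*)
        return query.Skip(startIndex)   // round trip 2: the page itself
                    .Take(maxRecords)
                    .ToList();
    }
}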

Expose as web/REST service

By exposing our data as a REST service – with the data transported as JSON (name/value pairs to old guys like me) – we minimise the network load of paging, making our UI fast and efficient.  HTML is big and fat; JSON is small and lean.

Also, you get a web service for your app that allows others to integrate.

Since I now employ this pattern extensively I make use of a generic base class for the results:

/// <summary>
   /// Base class for search results
   /// </summary>
   [Serializable]
   public  class ResultsBase<T>
   {

           /// <summary>
           /// The rowcount
           /// </summary>
           public Int32 rowCount;

       /// <summary>
       /// Indicates the rowcount exceeded the number requested.
       /// </summary>   
       public Boolean rowCountExceeded;

       public T[] Items;
   }

Here is the web service for the stored proc described earlier.

/// <summary>
   /// Summary description for FuelStopsWS
   /// </summary>
   [WebService(Namespace = "http://www.myfuel.com/")]
   [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
   [System.ComponentModel.ToolboxItem(false)]
   // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line.
   [System.Web.Script.Services.ScriptService]
   public class FuelStopsWS : System.Web.Services.WebService
   {

public class FuelStopSearchResults : ResultsBase <FuelStop>
       {
       }

       [WebMethod]
       [PrincipalPermission(SecurityAction.Demand, Authenticated = true)]
       public FuelStopSearchResults GetFuelStopsForUser(int? vehicleId,
               int? fuelTypeId,
           String garage,
               Int32 startRow,
               Int32 numberToFetch)
       {
           FuelStopSearchResults ret = new FuelStopSearchResults();
           String user = Context.User.Identity.Name;
           int rowCount = 0;
           Boolean rowCountExceeded = false;
           ret.Items = FuelStopManager.GetFuelStops(user,
               vehicleId,
               null,
               garage,
               startRow,
               numberToFetch,
               out rowCount,
               out rowCountExceeded).ToArray();
           ret.rowCount  = rowCount;
           ret.rowCountExceeded = rowCountExceeded;
           return ret;
       }

  • Note I use the [System.Web.Script.Services.ScriptService] attribute to expose REST services, as described in a previous blog.

And the associated business process layer:

public class FuelStopManager
   {

/// <summary>
        /// Get the fuel stops for a given user matching the search criteria
        /// </summary>
        /// <param name="userName"></param>
        /// <param name="vehicleId"></param>
        /// <param name="fuelTypeId"></param>
        /// <param name="garage"></param>
        /// <param name="startRow"></param>
        /// <param name="numberToFetch"></param>
        /// <param name="rowCount"></param>
        /// <param name="rowCountExceeded">Indicates there are more then rowCount matches.</param>
        /// <returns></returns>
        public static ICollection<FuelStop> GetFuelStops(
            String userName,
            int? vehicleId,
                int? fuelTypeId,
            String garage,
                Int32 startRow,
                Int32 numberToFetch,
                out Int32 rowCount,
                out Boolean rowCountExceeded)
        {
            rowCountExceeded = true;
            rowCount = 0;
            ICollection<FuelStop> ret =null;
            ret = DBFuelStops.GetFuelStops(userName, vehicleId,
                fuelTypeId,
                garage,
                startRow,
                numberToFetch,
                out rowCount,
                out rowCountExceeded);
            return ret;
        }

}

And data access layer:

internal static ICollection<FuelStop> GetFuelStops(string userName, int? vehicleId, int? fuelTypeId, string garage, int startRow, int numberToFetch, out int rowCount, out bool rowCountExceeded)
       {
           Database db = PetrolDatabaseFactory.PetrolDB;
           IList<FuelStop> ret = new List<FuelStop>();
           rowCount = 0;
           rowCountExceeded = true;
           IDataReader reader = db.ExecuteReader("usp_FuelStopsGetForUser",
              new Object[] {userName,
              vehicleId,
              fuelTypeId,
              garage,
              startRow,
              numberToFetch,
              rowCount,
              rowCountExceeded });
           using (reader)
           {
               while (reader.Read())
               {
                   ret.Add(CreateFuelStop(reader));
               }
           }

           return ret;
       }

Javascript paging

Once the REST service is available we can use a combination of Microsoft AJAX and jquery (or just jquery) to retrieve and page through our data.

Sample 1 – jquery mobile showing first 10

This sample shows the first 10 user fuel stops (always).  The relevant JavaScript and AJAX includes are on the master page described in a previous blog.

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %>

<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    Fuel Stops
</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="PageTitle" runat="server">
    Your recent fuel stops.
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
<script type="text/javascript" language="javascript" src="/PetrolServices/FuelStopsWS.asmx/js"></script>
<script type="text/javascript" language="javascript">
    $(document).ready(function () {
        // Get the fuel stops for a user - their history
        PetrolServices.FuelStopsWS.GetFuelStopsForUser(null, null, null, 0, 10, fuelStopsSuccess, fuelStopsFailed,complete);
    });

    function complete() {
        $('#ddlVehicleSelect').listview('refresh'); // '#' prefix needed to select by id
    }

    function fuelStopsSuccess(data) {
        var list = "";
        var addUrl = "<%= Url.Action("Add", "FuelStop") %>" + "/";
        for (var i = 0; i < data.Items.length; ++i) {
            var fs = data.Items[i];
            list += "<li data-role=\"list-divider\">"
            list += "<a href=\"" + addUrl + fs.FuelStopId + "\" rel=\"external\">Stop: "+fs.UserFuelStopNumber + " - " + fs.Garage + " Date - " + fs.StopDate.toDateString()  + "</a></li>";
            list += "<li>Quantity: "+ fs.Litres + " Litres, " + fs.GallonsString + " Gallons</li>";
            list += "<li>Price: "+ fs.PricePerLitre + " per litres " + fs.PricePerGallonString + " per Gallons</li>";
            list += "<li>Total: " + fs.TotalCostString + "</li>";
            list += "<li>MPG: "+ fs.MPGString + " Litres Per 100KM: " + fs.LPer100KMString + " Since stop " + fs.MPGSinceString + "</li>";

        }
        $("#dvFuelStops").empty();                              
        $("#lvFuelStops").append(list);              
        $("#lvFuelStops").listview();
        $("#lvFuelStops").listview('refresh');

    }

    function fuelStopsFailed() {
        alert("Could Not Fetch Fuel Data,");
    }
</script>
<ul data-role="listview" data-inset="true" data-theme="c" data-dividertheme="b" id="lvFuelStops">
<li data-role="list-divider">Fuel Stops</li>
<li><a href='<%= Url.Action("Add", "FuelStop") %>' rel="external">Add</a></li>
<div id="dvFuelStops" />
</ul>
</asp:Content>

The list displayed to the user is built on the fly from the JSON retrieved.

Sample 2 – with paging.

This sample was used to replace a RadGrid with paging, and has filters at the top of each column.  Again, the rows are built on the fly (to match the existing RadGrid style), as are the page links – which set the current page and refresh the grid:

for (var i = startPage; i < endPage; ++i) {
            pages += "<a href=\"#\" onclick=\"currentPage=" + i + ";GetClients(); return false;\" class=\"rgCurrentPage\" ><span>" + (i + 1) + "</span></a>";
        }

Notice, also, how we show the user if our rowcount was exceeded:

if (results.rowCountExceeded) {
    pageInfo = "&nbsp;Page <strong>" +
        (currentPage + 1) + "</strong> of more than <strong>" +
        (opdiv(results.rowCount, rowsPerPage) + 1) + "</strong>, items <strong>" +
        ((currentPage * rowsPerPage) + 1) + "</strong> to <strong>" +
        ((currentPage * rowsPerPage) + rowsPerPage) +
        "</strong> of more than <strong>" + results.rowCount + "</strong>.";
}
else {
    pageInfo = "&nbsp;Page <strong>" +
        (currentPage + 1) + "</strong> of <strong>" +
        (opdiv(results.rowCount, rowsPerPage) + 1) + "</strong>, items <strong>" +
        ((currentPage * rowsPerPage) + 1) + "</strong> to <strong>" +
        ((currentPage * rowsPerPage) + rowsPerPage) +
        "</strong> of <strong>" + results.rowCount + "</strong>.";
}

Here is the complete listing:

<%@ Control Language="C#" AutoEventWireup="true" CodeFile="Clients.ascx.cs" Inherits="Clients_Controls_Clients" %>
<script type="text/javascript" language="javascript" src="/commonscripts/StionaScripts.js" ></script>
<telerik:RadCodeBlock runat="server">
<script type="text/javascript" language="javascript" >
    window.$ = $telerik.$;
    $(document).ready(function() {
        currentPage = 0;
        GetClients();
    });
    // the current page (0 based)
    var currentPage;
    var rowsPerPage = 50;
    function GetClients() {
        $("#clientTable > tbody").empty();
        var count=0;
        $("RadAjaxLoadingPanel1").attr("display","");
        BusinessHierarchyWS.Clients.GetClients(
            $("input#vcrName").val(),
            $("input#txtAddress").val(),
            $("input#txtAccountantName").val(),
            currentPage * rowsPerPage,
            rowsPerPage,
            ClientsGot);   
    }
    function clearFilters()
    {
        $("input#vcrName").val("");
        $("input#txtAddress").val("");
        $("input#txtAccountantName").val("");
        currentPage = 0;
        GetClients();
    }
    function PageControl(totalrows)
    {
        var startPage = currentPage - 5;
        var endPage = currentPage + 5;       
        if(startPage < 0){
            startPage = 0;
            endPage = currentPage + 5 -(currentPage - 5);
        }
        var lastPage = opdiv(totalrows, rowsPerPage);
        var more = false;
        if(endPage > lastPage){
            endPage = lastPage;
        }       
        else{
            more = false;
        }
        var pages;
        pages = "";
        for (var i = startPage; i < endPage; ++i) {
            pages += "<a href=\"#\" onclick=\"currentPage=" + i + ";GetClients(); return false;\" class=\"rgCurrentPage\" ><span>" + (i + 1) + "</span></a>";
        }
        return pages;
    }
    function Search(){
        currentPage = 0;
        GetClients();
    }

    function SelectRow(checkboxId) {
        //GET THE status of the one clicked
        var chkBox = $("input[name=" + checkboxId + "]");
        var status = chkBox.attr('checked');
        // first clear them all
        $("#clientTable input:checkbox").attr('checked', false);
        // set the status of the one clicked
        chkBox.attr('checked', !status);
    }
    function ClientsGot(results) {
        var rows;
        rows = "";
        for (var i = 0; i < results.clients.length; ++i) {
        // alter the class for every other row
        var rowClass = ((i % 2) == 0) ? "rgRow" : "rgAltRow";
        var item = results.clients[i];
        var addressLine;
        addressLine = "";
        addressLine = stringAppend(addressLine, item.AddressLine1);
        addressLine = stringAppend(addressLine, item.AddressLine2);
        addressLine = stringAppend(addressLine, item.AddressLine3);
        var row = "<tr class=\"" + rowClass + "\" onmousedown=\"checkRightMouseClick(event,'ID_" + item.ClientId + "\')\"  onclick=\"SelectRow(\'ID_" + item.ClientId + "\')\"  >";
        row+="<td><input id=\"" + item.ClientId + "\" type=\"checkbox\" value=\"" + item.ClientId + "\" name=\"ID_" + item.ClientId + "\" onclick=\"this.checked=!this.checked\" /></td>";
        row += "<td>"+item.Name+"</td>";
        row+="<td>" + addressLine + "</td>"
        row+="<td>" +item.Accountant.Name +"</td></tr>";
        rows += row;
    }
    // insert the rows
    $("#clientTable > tbody").append(rows);
    // now set right button event
    var menu = $find("ctl00_RadContextMenu1");
    var tbody = $get("clientTable");
    //tbody = tbody["tbody"];
    menu.addTargetElement(tbody);
    var pageInfo;
    if (results.rowCountExceeded) {
        pageInfo = "&nbsp;Page <strong>" +
        (currentPage + 1) + "</strong> of more than <strong>" +
        (opdiv(results.rowCount, rowsPerPage) + 1) + "</strong>, items <strong>" +
        ((currentPage * rowsPerPage) + 1) + "</strong> to <strong>" +
        ((currentPage * rowsPerPage) + rowsPerPage) +
        "</strong> of more than <strong>" + results.rowCount + "</strong>.";
    }
    else {
        pageInfo = "&nbsp;Page <strong>" +
        (currentPage + 1) + "</strong> of <strong>" +
        (opdiv(results.rowCount, rowsPerPage) + 1) + "</strong>, items <strong>" +
        ((currentPage * rowsPerPage) + 1) + "</strong> to <strong>" +
        ((currentPage * rowsPerPage) + rowsPerPage) +
        "</strong> of <strong>" + results.rowCount + "</strong>.";
    }
    $("#clientTable div#pageInfo").empty();
    $("#clientTable div#pageInfo").append(pageInfo);
    var pageText = PageControl(results.rowCount);
    $("#clientTable div#pages").empty();
    $("#clientTable div#pages").append(pageText);
    $("RadAjaxLoadingPanel1").attr("display","none");

}

function checkRightMouseClick(e, clientId) {
    if (navigator.appName == 'Netscape'
           && e.which == 3) {
        SelectRow(clientId);
    }
    else {
        if (navigator.appName == 'Microsoft Internet Explorer'
          && event.button == 2)
            SelectRow(clientId);
    }
    return true;
}
</script>
</telerik:RadCodeBlock>
<asp:ScriptManagerProxy  runat="server" ID="scriptManager">
                <Services>
                    <asp:ServiceReference path="/BusinessHierarchyWS/Clients.asmx"  />
                </Services>
</asp:ScriptManagerProxy>

<div id="ctl00_ContentPlaceHolder1_ucClients_m_clientGrid" class="RadGrid RadGrid_FusionGrid">

<asp:Panel ID="pnlClientList" runat="server" Visible="true">
    <h3>
        <asp:Localize ID="m_clientsFieldSet" runat="server" Text="Select Client"></asp:Localize></h3>
<table cellspacing="0" class="rgMasterTable" border="0" id="clientTable" style="width:100%;table-layout:auto;empty-cells:show;">
    <colgroup>

        <col  />
        <col  />
        <col  />
        <col  />
    </colgroup>
<thead>
        <tr class="rgCommandRow">
        <td class="rgCommandCell" colspan="4">
<div class="filterclearline">
  <div class="filterclearlineleft">
    <input type="submit" name="btnSearch" value="Search"  class="buttons" onclick="Search();return false;" ID="btnSearch"/>
  </div>
  <div class="filterclearlineright">
  <input id="chkRememberFilter" type="checkbox" name="chkRememberFilter" />
  <label for="chkRememberFilter">Remember Filter</label>
  <input type="submit" name="btnShowAll" value="Clear all filters" id="btnShowAll" class="buttons" onclick="clearFilters();return false;" />   
  </div>
</div>
        </td>
        </tr>
        <tr class="rgPager" style="text-align:right;">
            <td colspan="4"><table cellspacing="0" border="0" style="width:100%;">
                <tr>
                    <td class="rgPagerCell NumericPages">
                    <div class="rgWrap rgNumPart" id="pages">
                    </div>
                    <div class="rgWrap rgInfoPart" id="pageInfo">                        
                    <!-- the page details goes here -->
                    </div></td>
                </tr>
            </table></td>
        </tr>
        <!-- title row -->
        <tr>
            <th scope="col" class="rgHeader">&nbsp;</th>
            <th scope="col" class="rgHeader">Name</th>
            <th scope="col" class="rgHeader">Address</th>
            <th scope="col" class="rgHeader">Accountant</th>
        </tr>
        <!-- the filters -->
        <tr class="rgFilterRow">
            <td>&nbsp;</td>
            <td style="white-space:nowrap;"><input name="vcrName" type="text" size="10" id="vcrName" class="rgFilterBox" onkeypress="if((event.keyCode == 13 || event.keyCode == 20)) {Search(); return false;}" /></td>
            <td style="white-space:nowrap;"><input id="txtAddress"  name="txtAddress" type="text" size="10"  class="rgFilterBox" onkeypress="if((event.keyCode == 13 || event.keyCode == 20)) {Search(); return false;}" /></td>
            <td style="white-space:nowrap;"><input id="txtAccountantName" name="txtAccountantName" type="text" size="10" class="rgFilterBox" onkeypress="if((event.keyCode == 13 || event.keyCode == 20)) {Search(); return false;}"  /></td>
        </tr>
    </thead><tfoot>
        <tr class="rgPager" style="text-align:right;">
            <td colspan="4"><table cellspacing="0" border="0" style="width:100%;">
                <tr>
                    <td class="rgPagerCell NumericPages">
                    <div class="rgWrap rgNumPart" id="pages">                       
                    </div>
                    <div class="rgWrap rgInfoPart" id="pageInfo">                        
                    <!-- the page details goes here -->
                    </div></td>

                </tr>
            </table></td>
        </tr>
    </tfoot>
    <tbody>
    <!-- clients go in here -->
    </tbody>
    </table>
</div>

What’s that windows service called?

If you're wondering what a service is called, so you can issue a net stop to it, use

sc query

This shows you the names of all services.  You can also query an individual service and issue start and stop commands, e.g. sc query MSSQLSERVER or sc stop MSSQLSERVER.

Do sc /? for help.

When you have a large download

In ASP.NET 1.x there is a default maximum request execution time of 90s; for 2.x and later it is 110s.  However, this only applies to release code (debug="false"), so make sure to test in release mode too!

In some circumstances, such as when serving a large file from .aspx, you may want to increase this.
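For instance, a hand-rolled download handler along these lines can hit that limit with a big file on a slow connection (a sketch – the path and handler are placeholders, not from the original post):

using System.Web;

public class DownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/octet-stream";
        // TransmitFile streams the file without buffering it all in memory,
        // but a slow client can still push the request past executionTimeout
        context.Response.TransmitFile(@"C:\files\big.zip");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}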

This can be done using the executionTimeout attribute of the httpRuntime element in the web.config.

Example – set the timeout to 10 minutes:

 

<configuration>
  <system.web>
    <httpRuntime executionTimeout="600"/>
  </system.web>
</configuration>

This cannot be set for an individual page with a location tag, but it can be set for a directory by placing a web.config in that directory.

Scripting in Management Studio – delete sproc check if exists

SSMS 2014 seems to have lost the "check for exists before drop" option when scripting a stored proc.

You can turn this back on.

To fix it: Tools -> Options -> SQL Server Object Explorer -> Scripting -> Check for object existence -> True.

Personally, our code must be SQL 2005 compatible, so I change that option as well, along with "script triggers and indexes", and disable scripting of USE DATABASE.
