How to handle ctl00_ContentPlaceHolder in JavaScript

If you develop in ASP.NET, you’ll notice when you run your web app that your server-side controls get an ID with something like “ctl00_ContentPlaceHolder1_” prepended to the name you assigned them.  To further frustrate things, that prefix is not guaranteed to stay the same from build to build.  So, how do you reference such an object from JavaScript?

Assume you have an image object on your page.  At design time, the HTML for the image might look like this:

<asp:Image ID="LogoImage" runat="server" ImageUrl="~/Images/logo.png" />

Here’s what that object might look like in the rendered, runtime HTML:

<img id="ctl00_ContentPlaceHolder1_LogoImage" src="/Images/logo.png" style="border-width:0px"/>

The problem is you don’t want to reference the object in JavaScript like this:

document.getElementById("ctl00_ContentPlaceHolder1_LogoImage")

Instead, do this:

document.getElementById("<%=LogoImage.ClientID%>")

Now your JavaScript is guaranteed to work even if the prefix changes in later builds or Visual Studio updates.
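For context, here’s a minimal sketch of how the markup and script can sit together in the .aspx page (the alt-text assignment is just a hypothetical placeholder to show the element is usable once found):

```
<asp:Image ID="LogoImage" runat="server" ImageUrl="~/Images/logo.png" />

<script type="text/javascript">
    // ASP.NET substitutes the real, prefixed client-side ID at render
    // time, so this lookup keeps working no matter what the prefix is.
    var logoImage = document.getElementById("<%= LogoImage.ClientID %>");
    if (logoImage) {
        logoImage.alt = "Company logo"; // hypothetical placeholder action
    }
</script>
```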

TomTom Go 630 Review

The Good:

  • Above average features:

    • 2D or 3D maps.
    • Change the cursor to one of about 7 or so car icons (or the default arrow).
    • Select from several different color schemes for the map and the course.  You can clearly see the course line, unlike on Magellan GPS units, whose course lines are light green on a light greenish-tan background.  (I had a Magellan Maestro 4070 until a little more than a week ago, when it and a Cobra radar detector were stolen out of my car in my driveway!)
    • Speed Limits.
    • Map Corrections:  This is a GREAT feature.  You can add names to streets, change names, change speed limits, delete streets, change locations of POIs (Points Of Interest).
      • You can upload these changes to TomTom and they’ll validate, then share with all other TomTom users.
      • You can download corrections from others.
      • When you make a correction, you have the option of entering a description of the change for the TomTom cartographers if it’s something other than a street name change, deletion, or change of POI.
    • Bluetooth to make it a wireless speaker phone by connecting wirelessly to your Bluetooth enabled phone.
    • Voice recognition.
    • A handful of different voices to choose from.
    • Large LCD screen.
    • Traffic light and Speed camera warnings
      • Though it seems they only show an icon on the screen and give no audible warning, which makes the feature mostly useless.
      • It comes with a set of locations pre-installed, and you get one free update on the web to get the latest.  After that, you’ll have to purchase a subscription for updates.
    • Can connect to the internet (via a USB connection to your computer) to download the latest maps as often as you want.
    • Your choice of alphabetized (default) keyboard or QWERTY keyboard.  Unfortunately, it defaults to alphabetized keyboard.  After searching, I found you can actually change it to QWERTY.  There’s no excuse for any device to provide ONLY (or default to) an alphabetized keyboard since everyone these days types on a QWERTY keyboard.
    • Smart course plotting called “IQ route”.  Depending on the day and the time of day you plot your course, you may get different routes because of known traffic patterns.  If you plot a course during say, rush hour, it will try to route you around known locations for slow moving traffic.  Route the same course for late Sunday night, and you’ll get a different (and presumably quicker) route.  This is a feature that’s been lacking in most GPS units for a long time, but severely needed.
    • Option to bypass unpaved roads.
    • Save itineraries.
    • They’ve opened up this product to developers and have released an SDK so you can write your own apps for it in C++.
    • App store.
    • Backup your data to your PC.
    • TomTom HOME PC application to hook up your device to your PC and do all sorts of cool stuff with it.  This is where you access the app store, backup, send/receive map corrections, etc…
    • If you run a company that deals with transportation, you can customize this for your company’s use and set up a server from which your users can update their TomTom Go 630s.
    • Plan routes for driving, biking, or walking.  Obviously, walking routes will take you through parks and such (off road).

 

  • Standard stuff you expect:

    • Spoken voice, turn-by-turn directions
    • Shows where you are on the map as you’re driving
    • Current speed
    • Compass
    • Volume control
    • POI (Points of interest)
    • Favorites
    • Standard mini-USB port to connect to your computer.
    • On screen keyboard.
    • Reroute around detours.
    • Options to bypass toll roads.
    • Bright colors for viewing in daylight
    • Dark colors for viewing at night.
    • Road Hazard warnings (via an FM receiver).
    • Search by name.
    • Itineraries (set multiple waypoints on a larger journey).
    • Bypass toll roads.

 

The Useless:

  • Photo uploads (for slideshows)
  • MP3 music player

 

The Bad:

  • The mounting of the GPS unit onto the windshield hanger is difficult.
  • Correction (2009-08-18) on the statement below.  It does have search by name… It’s just difficult to find.
    • [The following statement is false] It lacks a very important “Search by name” feature.  For example, if you know the name of a place, but not where it is, you’re out of luck.  This is a critical feature and there’s no excuse for not having it.  Both Garmin and Magellan GPS units have this feature.
  • Volume is too low, even at its maximum setting, even after turning off “adjust for background noise”.
  • It’s a little bulky.  Having just come from a Magellan Maestro 4070, which was flat front and back and pretty thin, this one is curved on the back and about twice as thick.  Compared to the Garmin Street Pilot C130 (which is shaped like an old CRT TV at about 4 inches deep), though, it’s reasonably thin.  It easily fits into my pants pocket.

 

Extra (paid extra) features:

  • Celebrity voices such as Dennis Hopper, Burt Reynolds, and Homer Simpson can be purchased and downloaded.  They range from about $5 to $13.
  • Applications:  This is very cool.  There’s an actual “apps store” where you can download all kinds of apps (and pay for them) for things like recording your path as you drive for export to Google Earth KML files and such.  I’ve often wondered why others don’t have this feature considering these GPS units are basically pocket computers that can do A LOT more than just navigate.
  • Traffic light and speed camera (both fixed and mobile) location updates.  You have to subscribe to this service to continuously get the latest.
  • Live Traffic.
  • By default, it does NOT switch between daytime and nighttime colors.  There is a setting, buried deep inside the menu system to enable this though.

 

Conclusion:

Overall, this is a very nice, windshield- or dash-mounted GPS navigation system.  The ability to make your own corrections, share them with others, and receive corrections from others is awesome.  The feature that plots a course based on when you’re driving is very nice, and if conditions change as you’re driving, it can suggest alternatives.  This is much more than "just a GPS" compared to the other 2 I’ve had (the Garmin Street Pilot C130 and Magellan Maestro 4070).  TomTom seems to be really on the ball with getting as much out of this device as possible, and going even further by opening it up to developers.  I’m gonna download the SDK and play around with it.  I will report back after I’ve written an app.  The more I use this thing, the more I like it.

Uninstalling Windows Live Family Safety Filter

If you’ve installed Windows Live Family Safety Filter and have since decided you want to uninstall it, you’ll find that the following items, which you’d normally use to uninstall it, don’t exist:

  1. A “Windows Live Family Safety Filter” folder or icon under the programs menu in the start menu.
  2. In control panel, a listing for “Windows Live Family Safety Filter” under “uninstall a program”.

So, how the heck are you supposed to uninstall it?  Like so:

  • Go to Control Panel (There are so many different ways to get to the control panel with all the popularly used versions of Windows, I’ll leave it up to you to know how to do this in your own version).
  • Choose “Uninstall a program”.
  • Double click on “Windows Live Essentials”:

[screenshot]

  • Choose “Uninstall” on the “Uninstall or repair your Windows Live programs” screen, then click “Continue”.

[screenshot]

  • Put a check mark beside ONLY “Family Safety” (You’re choosing what you want REMOVED).  Then click “Continue”.

[screenshot]

Then, you’ll finish through these two windows…

[screenshots]

That’s it!

Dish Network gift cards

Occasionally I’ll have a gift card or something that I can’t use myself, but that earns me a discount on my own service if someone else uses it.  I’ve got 2 one-time-use gift cards from Dish Network.  The images are below.  To use either of them, follow the directions posted just below them.  If you don’t mind, let me know once you’ve used one (and which one) so I can take it off this blog.

[images of the two gift cards]

DishNetwork Gift Card (copied from the text on the back of the cards):

This special Gift Card entitles you to DISH Network Satellite TV for the whole house
(up to 4 rooms) with Digital Advantage including:

• FREE Activation w/ Gift Card (a $99 value)*
• $80 CREDIT on your first bill*
• FREE HBO* and Starz for 3 months (up to a $66 value)*
• FREE HD DVR Equipment Upgrade

Visit your local participating Retailer or call 1-800-920-GIFT (4438)

*Requires 24-month commitment.
Early cancellation fee applies.
Expires 1/31/09.  No cash value.
Includes Standard Professional Installation.
Gift Card may be valid for other promotions.

Deploying a Click Once app to multiple environments

Problem:

You can’t move a "Click-Once" deployed application from one environment (or server) to another.

Details of problem:

If you work in a normal IT shop, you probably have multiple, duplicated environments set up for your code. For example, a typical set of environments would be:

  1. Development
     • Used by the developers while developing their code.
  2. Staging/QA/Testing
     • The developer usually places (or requests system admins to place) their release candidates here for users to test and validate.
  3. Production
     • Once the users have validated the release candidate in the staging environment, the system administrators are asked to move it to production, or the "Live" environment.

Each of the 3 environments usually has duplicated database servers and web servers, and potentially duplicated network shares and any other resources needed by an application.

If your work environment has good security procedures in place, your developers probably don’t have access to make changes to the staging environment, and they almost certainly don’t have access to make changes to the production environment. This is a win-win for everyone involved: it makes those in charge of IT security happy, and it gives the developers plausible deniability when something goes wrong in an environment they don’t have access to.

Most web applications are deployed, usually by the developer, to the development environment. Once the developer is happy with the code there, they will ask the system administrators to move that code to staging. The key here is that it’s the SAME binaries that the developer put in the development environment. The system administrator will likely make changes to the web.config file to change the database connection string(s) to point to the staging database(s) instead of the development one(s) and any other config changes needed to make the staging code use the staging resources and not the development resources. This connection string, for security reasons, is usually not provided to the developer. Later, after users test on staging, the system administrator will then move that same code to production, making the appropriate changes to the config files.

This process of moving the same binaries from environment to environment is critical to ensuring that what gets moved from one environment to the next is the actual code that was tested in the prior environment. Hence the problem…

To deploy a Click-Once application, you have to provide, at build time, from within Visual Studio, the URL from which the end users will launch the application. YOU CANNOT CHANGE THIS AFTER DEPLOYMENT!!! This means the deployment model described above, which is in common use, cannot work with a Click-Once deployed app. Once you build and publish from Visual Studio, the "launch from URL" is in the deployed files, and the deployed files are CRC’d as part of the Click-Once deployment. When a user runs the application, one of the first things that happens is that the .NET Click-Once technology on the end user’s machine kicks in and validates, via the CRC, that all the files are in the original condition they were in when deployed from Visual Studio. If it detects that a config file (or any file) has been altered, it throws a security exception and refuses to run the application.

The "Solution":

I put the word "solution" in quotes to imply that this is not really a solution. It is a workaround that does not resolve the problem of needing to deploy a single binary image from environment to environment. Only Microsoft can fix that problem. Instead, this is a description of how you have to deploy to multiple environments and the deployment changes your system administrators will have no choice but to adopt. This is not a preference, but a technical requirement. There is no alternative with Visual Studio 2008. I have been told, BTW, that this will be fixed in Visual Studio 2010.

Here’s what you do:

After you’ve deployed to your development environment, take a snapshot of your code in whatever method you prefer; whether that’s making a copy of the entire solution to another sub-folder, or checking it into your source control repository and labeling it, or whatever other method you can concoct. You must do this because later, when you deploy to other environments, a significant amount of time may have passed and you may have continued on with developing newer features and your code will not be the same. You’ll need the older snapshot to rebuild to the staging environment, then again to the production environment.

Later, after your own testing in development, when you’re ready to deploy a release candidate, you’ll need to rebuild from that snapshot codebase, but making only the appropriate changes to make it build for the new environment (the "launch from URL" and the config file changes). Then again, some time later, after the users have tested and validated the staging code, you’ll need to rebuild again for the production environment, using that same snapshot, making the appropriate "launch from URL" and config file changes.

Alternate Solution:

Note: Most system administrators would not want to do this alternate solution, but it is a more secure strategy and takes the blame for differences between environments off your shoulders.

Pre-requisites:

  • The system administrator who will be performing the move needs:
    • The same version of Visual Studio you used to create the app.
    • All of the 3rd party and custom add-ons (like Telerik, LLBLGen, Oracle, DB2, or other database providers and drivers, etc.) installed and configured.
    • Access to your source control system or a copy of the snapshot of your source code.
    • Knowledge of how to build and deploy Click-Once applications using Visual Studio.
    • A compilable, error-free snapshot of your code.

1. When you ask the system administrator to move your code from dev to staging, you instead provide them with your snapshot of the source code, whether it’s a labeled version in your source code repository or a copy in a folder somewhere.

2. The sysadmin then configures, builds, and deploys your code to the staging environment.

3. Later, when the users have validated your staging deployment, you then ask the system administrator to redeploy that snapshot to production, where the admin will make the necessary config changes and build and deploy to production.

This actually requires less work for the developer, is more secure, and gives the customer a little more assurance that what they tested is actually what’s being deployed to production. At the very least, it takes the blame off the developer’s shoulders if something is changed between environments.

Quicken 2009 Bugs

This is just an online, public bug report about bugs in Quicken 2009. I’m hoping that publishing them will quicken (pardon the pun) Intuit’s efforts to fix them.

Here are the bugs I’ve found so far:

  • When setting up a new credit card account to download transactions, after it’s successfully connected, the “Account Setup” dialog box has some display problems and looks like it’s hiding some information:

[screenshot]

  • Renaming Rules: This is quite an annoying bug. I personally do not want Quicken to rename my payees, yet there seems to be NO WAY to prevent Quicken from doing so. I participated in 3 online tech support chats and 2 call-back phone support incidents in the last week. NONE of their suggestions worked, AND they refuse to accept that this is a bug. Here’s the problem: When you download transactions using PC Banking, then go and accept your transactions, Quicken will suggest renaming rules… actually, it will DICTATE renaming rules. You cannot tell it “No”. Furthermore, the dialog box that pops up informing you of the new dictatorial renaming rules being forced on you has a check box that says something like “don’t inform me of renaming rules again”. There are 2 problems with this:
    • I believe that checking it only causes Quicken to not inform you of new renaming rules; it’ll still make new renaming rules.
    • You only have 2 buttons, “Apply” and “Cancel”. If you click “Apply”, it’ll apply the rule(s) that it’s showing you; I think that’s the only way to make the “don’t tell me anymore” check box stick. If you click “Cancel”, you’re canceling the dialog box and therefore canceling your “don’t tell me anymore” check box, which means it’ll continue to tell you. Also, canceling the dialog box does not prevent it from enforcing the rule.

I’ve spent about 4 hours with tech support over the last week trying to undo this. There’s a dialog box buried in the app where you can tell it don’t create new rules. It was already configured to NOT do those rules, yet it does them anyway. This is clearly a bug and Intuit needs to step up to the plate and admit it and fix it. I’ve been reporting this bug since Quicken 2007. I skipped Quicken 2008, so I can neither confirm nor deny that the bug is in Quicken 2008, but I’d assume that it’s there as well.

  • No Sound:  Quicken has several sounds for different events like startup (a short tune), accept transaction (cha-ching), and others.  All of a sudden, Quicken 2009 has stopped playing sounds.  Yes, the play sounds option is indeed checked, and yes, sound works in all other programs (this is not my first time messing with a computer, BTW 🙂).
  • File corruption:  This is a serious issue.  EVERY TIME I call Quicken support, they claim the file is corrupt.  This seems to be their excuse for all bugs in the software.  They want to dismiss any issue, rather than admit it’s a bug, by claiming it’s a corrupt file.  Fine, it’s a corrupt file.  Now fix Quicken so it STOPS CORRUPTING my files!  This has been going on through at least 2 versions of Quicken (2007 and 2009; I skipped 2008).  A bug this serious requires a complete rewrite of their file access data layer routines.

Intel DG33FB Motherboard stalls for 2:45 minutes when booting – SOLUTION!

Usually I’ve got programming posts here, but occasionally I’ll rant or post a solution to a problem that’s just computer related and not programming related. This is one such post.

PROBLEM:
I’ve been experiencing a painfully slow boot process even though I’ve got a pretty powerful system at the time of this writing (2008-01-24). I’ve got an Intel DG33FB motherboard with a processor with 4 cores (an Intel Core2 Quad CPU Q6600 running at 2.4Ghz) and 4GB of RAM, plus other, non-boot time relevant hardware added. The problem is the motherboard would stall for 2 minutes and 45 seconds BEFORE attempting to boot from a drive.

To add insult to injury, not only does the motherboard stall for 2 minutes and 45 seconds BEFORE it even attempts to boot from any drive (this means you can’t blame Windows), but after Windows starts booting, it then stalls for ANOTHER 2:45!!! I presumed that Windows, too, was waiting on some sort of hardware response from the motherboard, since the timings were awfully suspicious. So that means I’m waiting 5:30 for ABSOLUTELY NOTHING! This doesn’t even count the actual Windows boot-up time.

So, I did what anyone else would do: I Googled for a solution. I found many blogs and posts from other users experiencing the same problem. Few found a solution. Several had contacted Intel, and Intel didn’t know what the problem was. So, I went to the Intel site and searched for a solution anyway. I did find that the latest BIOS update (1/5/2009) addressed a “slow bootup” problem, so I downloaded and installed it. No luck. Other blog posts said their authors fixed the problem by downgrading their BIOS to an older version. For some reason, Intel found and fixed this problem, then removed the fix from subsequent BIOS updates. I wasn’t really keen on downgrading my BIOS. Sometimes there are bug fixes for some serious stuff… even more serious than a painfully slow boot-up.

I decided I’d try ONE LAST thing before I went the route of downgrading my BIOS. I shut down the PC and unplugged EVERYTHING. And by “everything”, I mean all my internal hard drives (3 of ’em), my DVD drives, all my USB devices (including keyboard and mouse), my FireWire devices, and my network cable. The only things left were my video cable and audio cables (all analog output, so virtually no chance they were causing problems). I turned on the computer and, lo and behold, it only waited about 19 seconds at the spot where it had been stalling. So, I determined an external peripheral was somehow causing the problem. I plugged all the USB cables back in, turned it on, and got the stall again: narrowed down to USB devices. So, I unplugged them all and rebooted for a sanity check, and it only stalled 19 seconds. I then plugged each USB cable back in one at a time, booting after each one. I narrowed it down to my USB 2.0 hub (or one of the devices connected to it). So, I then left the hub plugged in but unplugged everything from it and rebooted… quick start. OK, now I plugged each USB device back into the hub, 1 at a time, rebooting after each one until the problem returned.

SOLUTION!
Turns out the thing that slowed it down was a USB cable… JUST a USB cable THAT WASN’T PLUGGED INTO ANYTHING!!! (It had been plugged into a drive before I started the test, but the cable by itself seemed to cause the problem.) It was the cable that I normally have plugged into my external USB Seagate 500GB FreeAgent Pro drive. I thought, what the heck, let’s plug the drive back in and reboot, and the problem was now gone. Strange! So I plugged everything back in, rebooted, and the problem is now gone!!! Even the secondary delay during the Windows boot.

So, problem gone, but I’m still a little uneasy that the computer’s hardware configuration is now back in the exact same configuration it was in when I had the problem. But I’m good to go.

So, if you’re one of the unlucky DG33FB owners experiencing this problem, unplug everything, then plug each device back in 1 at a time, rebooting after each one until you find the culprit.

Hope this helps!

Type ‘System.Web.UI.WebControls.Parameter’ does not have a public property named ‘DbType’.

I recently got this error after deploying a .NET 2.0 web site from my dev machine to a development web server. It ran fine on my machine, but continuously generated the error:

Type ‘System.Web.UI.WebControls.Parameter’ does not have a public property named ‘DbType’.

on the dev server. I validated that both my dev machine and the web server had the exact same version of the .NET framework AND that the web site on the dev server was configured for the 2.0 framework.

My dev machine is running Windows XP Pro 64bit and the dev server is Windows Server 2003 Standard Edition 32bit. The app was developed with Visual Studio 2005.

As a sanity check, I started my VM with Windows Server 2003 32bit with Visual Studio 2005. I checked the app in from my XP 64bit host to VSS, then retrieved it from VSS in my VM. Compiling and running in the VM worked just fine, but deploying it to dev still produced the error.

I created a new typed dataset, dragged the same table to it, and copied the queries from the offending datatable in the original typed dataset. I then changed the code to reference the new dataset. Compiled, ran locally, and it worked (no surprise); then deployed to dev, and it worked! Huh? It still had the DbType in the dataset files… which is where the dev server was pointing me for the error before. This makes no sense.

I then checked that code into VSS from my VM and got the latest onto my XP 64bit host. Compiled locally, ran, and it worked; then deployed to dev, and it worked. So, I went back to my original typed dataset, pulled the designer up side by side with the new dataset, and examined the fields, the queries, and the parameters… all the same! I then deleted some extra datatables in my dataset that weren’t being used (and had referential links to the offending table), excluded the new dataset from the project (rather than deleting it), then changed the code back to use the original dataset. Now everything works everywhere!

So, I’m not entirely sure what the cause was, but I think removing the linked datatables from the dataset helped solve the problem. Of course, this doesn’t explain why the old version worked differently between my dev machine and the dev server.

Anyway, there you have it. If you run into this problem, try deleting links from your datatables and/or recreating the typed dataset, one piece at a time, compiling and deploying at each step.

Find a GUID (or anything) in ANY table in your database!

[Updated 2015-03-12]

Do you ever run across an ID, such as a GUID or an integer (or anything else, for that matter), while stepping through code and aren’t quite sure which table it belongs to, or need to know everywhere it’s referenced in your database? I use GUIDs like they’re going out of style, for a plethora of reasons, especially in my tables. EVERYTHING in my tables has a GUID to identify it. I use them as primary keys and as foreign keys. They’re computer generated, can be generated on the db server, the web server, or even on the client, and can be guaranteed to be unique in that table, in an entire database, or in the entire universe, for that matter (try doing that with auto-incrementing integers!!!).

Anyway… I found myself needing to find an arbitrary GUID anywhere in the database and was doing a lot of manual querying. Finally, I decided that this could be EASILY automated by querying the database metadata for tables with GUID columns, then automatically querying those tables and columns for my GUID and outputting the results for each table and column where the GUID was found.

Below is my solution: a stored procedure called FindGuidInAnyTable that takes a single parameter (a GUID) and searches the entire database for all occurrences of it. You could adapt this for any column type, not just GUIDs, BTW.

[Update 2015-03-13: Using the newer, friendlier system table names, and including the schema name as well]

ALTER procedure [dbo].[FindGuidInAnyTable]
(
   @FindThisGUID uniqueidentifier
) as

begin

   declare @TableSchema varchar(100)
   declare @TableName   varchar(100)
   declare @NextColumn  varchar(200)
   declare @Select      varchar(1000)

   create table #ResultsTable
   (
      SchemaName varchar(100),
      TableName  varchar(100),
      ColumnName varchar(100)
   )

   declare TableNameCursor cursor for
      select
         table_schema,
         table_name
      from
         INFORMATION_SCHEMA.tables
      where
         table_type = 'BASE TABLE'

   OPEN TableNameCursor

   FETCH NEXT FROM TableNameCursor INTO @TableSchema, @TableName

   WHILE (@@FETCH_STATUS <> -1) BEGIN

      IF (@@FETCH_STATUS <> -2) BEGIN

         declare ColumnNameCursor cursor for
            select distinct
               COLUMN_NAME
            from
               INFORMATION_SCHEMA.COLUMNS
            where
               TABLE_SCHEMA = @TableSchema and
               TABLE_NAME   = @TableName and
               DATA_TYPE    = 'uniqueidentifier'

         OPEN ColumnNameCursor

         FETCH NEXT FROM ColumnNameCursor INTO @NextColumn

         WHILE (@@FETCH_STATUS <> -1) BEGIN

            IF (@@FETCH_STATUS <> -2) BEGIN

               set @select = 'insert into #ResultsTable select ''' + @TableSchema + ''' as SchemaName, ''' + @TableName + ''' as TableName, ''' + @NextColumn + ''' as ColumnName from [' + @TableSchema + '].[' + @TableName + '] where [' + @NextColumn + '] = ''' + cast(@FindThisGUID as varchar(50)) + ''''
               print @select
               exec(@select)

            end

            FETCH NEXT FROM ColumnNameCursor INTO @NextColumn

         END

         CLOSE      ColumnNameCursor
         DEALLOCATE ColumnNameCursor

      end

      FETCH NEXT FROM TableNameCursor INTO @TableSchema, @TableName

   END

   CLOSE      TableNameCursor
   DEALLOCATE TableNameCursor

   select
      count(*) as Instances,
      SchemaName,
      TableName,
      ColumnName,
      @FindThisGUID as GuidFound
   from
      #ResultsTable
   group by
      SchemaName,
      TableName,
      ColumnName
   order by
      SchemaName,
      TableName,
      ColumnName

end

For MS SQL Server versions prior to SQL Server 2005, do this:

CREATE procedure [dbo].[FindGuidInAnyTable]
(
   @FindThisGUID uniqueidentifier
) as

begin

declare @TableName varchar(100)
declare @NextColumn varchar(200)
declare @Select     varchar(1000)

create table #ResultsTable
(
 TableName varchar(100),
 ColumnName varchar(100)
)

declare TableNameCursor cursor for
  select distinct
     o.name
  from
 syscolumns c,
 sysobjects o
  where
     c.id    = o.id and
     c.xtype = 36   and
     o.xtype = 'U'

OPEN TableNameCursor

FETCH NEXT FROM TableNameCursor INTO @TableName

WHILE (@@FETCH_STATUS <> -1) BEGIN

  IF (@@FETCH_STATUS <> -2) BEGIN

     declare ColumnNameCursor cursor for
        select distinct
           c.name
        from
 syscolumns c,
 sysobjects o
        where
           c.id    = o.id       and
           c.xtype = 36         and
           o.name  = @TableName and
           o.xtype = 'U'

     OPEN ColumnNameCursor

     FETCH NEXT FROM ColumnNameCursor INTO @NextColumn

     WHILE (@@FETCH_STATUS <> -1) BEGIN

        IF (@@FETCH_STATUS <> -2) BEGIN

           set @select = 'insert into #ResultsTable select ''' + @TableName + ''' as TableName, ''' + @NextColumn + ''' as ColumnName from [' + @TableName + '] where [' + @NextColumn + '] = ''' + cast(@FindThisGUID as varchar(50))+ ''''
           print @select
           exec(@select)

        end
 
        FETCH NEXT FROM ColumnNameCursor INTO @NextColumn
 
     END

     CLOSE      ColumnNameCursor
 DEALLOCATE ColumnNameCursor

  end

  FETCH NEXT FROM TableNameCursor INTO @TableName

END

CLOSE      TableNameCursor
DEALLOCATE TableNameCursor

select
  count(*) as Instances,
 TableName,
 ColumnName,
  @FindThisGUID as GuidFound
from
  #ResultsTable
group by
 TableName,
 ColumnName
order by
 TableName,
 ColumnName

end

Here’s how you use it:


exec FindGuidInAnyTable '34fdbfa2-cdbd-4f34-bd2e-1423063fb707'

Here’s what the results look like in my database:

[screenshot]

I put this stored procedure in all of my databases. It’s a real time saver!

Web Service makes duplicate, incompatible copies of individual types

I had a tough time coming up with a title for this article. It’s hard to explain exactly what the problem is, and choosing key words to help people who are looking for a solution find this article will be even more difficult. Let me try to explain the problem:

I’ve got a web service. It has many classes and enumerations in it. Many classes contain members of some of the other classes. When I write an application that consumes the web service, some of those types end up as two separate types. For example, suppose I have an enumeration like this:


public enum MyEnum
{
    one,
    two,
    three
}

Suppose I have two classes like this:


public class MyFirstClass
{
    public MyEnum TypeOfThing;
    public string something;
}

public class MySecondClass
{
    public MyEnum TypeOfOther;
    public bool IsThisPlainEnough;
}

Then, in my consuming application, the proxy class (which is auto-generated by Visual Studio when you add a web reference) will contain TWO separate MyEnums: one with the name MyEnum and another named MyEnum1. My two classes might come across like this:


public class MyFirstClass
{
    public MyEnum TypeOfThing;
    public string something;
}

public class MySecondClass
{
    public MyEnum1 TypeOfOther;
    public bool IsThisPlainEnough;
}

As you can see, it makes MyFirstClass.TypeOfThing incompatible with MySecondClass.TypeOfOther.

I haven’t researched this enough to determine exactly what’s causing this, but I do know something much more important… a solution!

Here’s what you do:

Add an XmlType attribute to the type that gets duplicated, like this:


[System.Xml.Serialization.XmlType(Namespace="MyNameSpace.MyEnum", TypeName="MyEnum")]
public enum MyEnum
{
    one,
    two,
    three
}

Now, when I add a web reference to my class library, it generates ONLY ONE MyEnum and it makes MyFirstClass.TypeOfThing compatible with MySecondClass.TypeOfOther.

That’s it.