Thursday, December 2, 2010

Object Comparison

Overview


So this is a really quick example of how you can use reflection to compare two objects to see if they are the same or not.  This allows you to compare two objects (see limitations below) without having to modify them.

Basically, what this example does is loop through the properties of two objects and return a boolean indicating whether they are the same or not.  If they're not the same, an output string shows which values differ.

Sample Code

The basic code to do the comparison is as follows:

//Required namespaces: System, System.Reflection

public static bool IsSame(object a, object b, out string differences)
{
    differences = string.Empty;
    bool isSame = true;

    if (a.GetType() != b.GetType())
    {
        //no point comparing properties of two different types
        differences = "Type does not match!";
        return false;
    }

    //query the object to get the list of public properties it exposes
    PropertyInfo[] propertyInfos = a.GetType().GetProperties();

    //sort properties by name so the output is in a predictable order
    Array.Sort(propertyInfos,
            delegate(PropertyInfo propertyInfo1, PropertyInfo propertyInfo2)
            { return propertyInfo1.Name.CompareTo(propertyInfo2.Name); });

    //loop through the properties to compare each value on a to the value on b
    foreach (PropertyInfo propertyInfo in propertyInfos)
    {
        //GetValue returns null for null property values, so guard before ToString()
        object aRaw = propertyInfo.GetValue(a, null);
        object bRaw = propertyInfo.GetValue(b, null);
        string aValue = (aRaw == null) ? string.Empty : aRaw.ToString();
        string bValue = (bRaw == null) ? string.Empty : bRaw.ToString();

        if (aValue != bValue)
        {
            isSame = false;
            differences += string.Format("{0}: {1}, {2}" + Environment.NewLine, propertyInfo.Name, aValue, bValue);
        }
    }

    return isSame;
}

Now, there is a problem with this code... GetProperties() only returns public members that have a "get" accessor.  So in the example below, I get a "true" that the objects are the same, but in reality the public "Bio" isn't getting checked because it's not a true property; it's a public field.  I did this intentionally to show some considerations for using reflection to compare an object.

So here is my definition for my "Person" object in the example:


    public sealed class Person
    {
        private string m_FirstName;
        private string m_LastName;
        private int m_Age;

        public string FirstName { get { return m_FirstName; } set { m_FirstName = value; } }
        public string LastName { get { return m_LastName; } set { m_LastName = value; } }
        public int Age { get { return m_Age; } set { m_Age = value; } }
        public string Bio;

        public Person()
        {

        }
    }

And this is the actual test method:


Person a = new Person();
Person b = new Person();

a.FirstName = "John";
a.LastName = "Doe";
a.Age = 55;
a.Bio = "John likes tacos.";

b.FirstName = "John";
b.LastName = "Doe";
b.Age = 55;
b.Bio = "John likes chips.";

/*a.FirstName = "John";
a.LastName = "Doe";
a.Age = 55;
a.Bio = "John likes tacos.";

b.FirstName = "Jane";
b.LastName = "Doe";
b.Age = 22;
b.Bio = "Jane likes apples.";*/

bool result;
string differences;

result = Compare.IsSame(a, b, out differences);

MessageBox.Show(differences, string.Format("Are these the same? {0}", result.ToString()));


Notice I commented out a snippet that you can uncomment and run to show the differences between the objects.

Alternatives


The pro of this comparison is that I can determine which values were different.  The con (as I've shown) is that this needs more fleshing out: public fields like "Bio" don't get pulled in and checked, and Bio was different.  Also, if I had a property that returns an array, I'm not checking its elements; the ToString() would likely return just the type name of the array/collection rather than comparing the individual members.
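One way to flesh that out (a sketch, assuming the same IsSame method as above): public fields like "Bio" can be picked up with GetFields() and compared right after the property loop.

//inside IsSame, after the property loop: also compare public fields
FieldInfo[] fieldInfos = a.GetType().GetFields();

foreach (FieldInfo fieldInfo in fieldInfos)
{
    object aRaw = fieldInfo.GetValue(a);
    object bRaw = fieldInfo.GetValue(b);
    string aValue = (aRaw == null) ? string.Empty : aRaw.ToString();
    string bValue = (bRaw == null) ? string.Empty : bRaw.ToString();

    if (aValue != bValue)
    {
        isSame = false;
        differences += string.Format("{0}: {1}, {2}" + Environment.NewLine, fieldInfo.Name, aValue, bValue);
    }
}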

Some alternatives I can think of off the top of my head... I could serialize both objects to binary (or XML) and compare the outputs; if the two serialized streams match, the objects match.  But it wouldn't tell me what changed.

I could also override "ToString" to output all the values so that I could compare the ToString results of the two objects.  In that same vein, I could implement IComparable so the CompareTo function could determine if my objects are the same or not.

I guess my point is that this is one potential way of solving the problem of comparing objects, but there are others as well.  Which you use depends on your needs.

Wednesday, November 24, 2010

Should I build a "Windows" or "Web" App?

Hundreds of blogs have been written about the topic but I thought it would be worth re-visiting again.

A lot has changed in the last few years in the web world, which has given some parity to the Windows vs. Web app debate.  I don't think there is a clear winner, but there are probably scenarios where one makes more sense than the other.

The deployment/bug fix process is one area I really think requires some consideration when choosing Windows vs. Web.  If Debbie in Accounting has a problem with a screen that only she uses, in the Windows world I can give her a new executable (via ClickOnce or XCopy, etc.) and she's good to go; nobody else is impacted.

In the web world, a production fix could require all the users to get out of the system.  If you're running a state-aware site, dropping in a new DLL will cause the sessions to restart.  (Note this applies to changing the code-behinds; you can change the ASPX pages freely.)  To prevent this, you need a web application that is stateless (or uses cookies or some other method of storing session information) so updates will not hose up everyone else.

Also, web development tends to take longer than Windows development because of additional requirements such as security and performance optimization.

Aside from that area, to me anyhow, there are a lot of similarities between the platforms.

Wednesday, November 10, 2010

Object Serialization in .NET

Serialization and its counter-process, deserialization, form a mechanism to "package" an object for transport to another application or process.

You might want to send a stream of data over a web service, store it in a database, or need some type of interop-friendly way to transport an object across systems.

The two main methods of serialization in .NET are binary and XML.  In a binary serialization, the object is converted to (get this) a binary stream, which is the fastest method of serialization.  Alternately, with XML serialization the stream is plain text, which is better for interop.  One key note is that XML doesn't preserve the data types as well as binary, but binary is really only useful for .NET-to-.NET communication/transport.

Prerequisites to serialization... Serialization is not "free" in the sense that you have to do some work to make your classes/objects serializable.  The easiest way to do this is to mark your class with the attribute as shown here:

    [Serializable]
    public sealed class SomeObject
    {
        public int n1 = 0;
    }

This works because .NET already knows how to serialize basic data types like int, string, etc.  However, if I had another custom class as a member, it would need to be marked as Serializable as well.  This can have a cascading effect (all objects in the tree must be serializable [or marked as NonSerialized]), so keep that in mind when looking into this solution.

As I just alluded to, if you want a member to skip the serialization step then it needs to be marked as [NonSerialized], but keep in mind that member will be null when you deserialize your object.
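For example (adding an illustrative scratch field to the class below):

    [Serializable]
    public sealed class SomeObject
    {
        public int n1 = 0;

        [NonSerialized]
        public string scratchData; //skipped during serialization; null after deserialization
    }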

A simple example of how to serialize an object:

//Required namespaces
using System.IO; //required for stream
using System.Runtime.Serialization.Formatters.Binary; //required for binary formatter
using System.Runtime.Serialization; //required for Formatter Interface

SomeObject obj = new SomeObject();

IFormatter formatter = new BinaryFormatter();

Stream stream = new FileStream("MyFile.bin",
                                     FileMode.Create,
                                     FileAccess.Write, FileShare.None);

formatter.Serialize(stream, obj);
stream.Close();

As you can see above, we use a formatter.  In this case we used the BinaryFormatter, but we could have used an XML-based serializer as well.  The formatter, as the name suggests, converts the object into the specified serial format and puts it into a stream.

In this case I put the object into a file but I could modify the code and store it in a string or some other container.

As I mentioned earlier now with this object stored in a stream/file/whatever we can now do with it what we please.  For our example, we'll say that this binary goes into a folder for processing by a Windows Service.  In the service it is very simple to pull the object back into memory and start working with it...

FileStream fs = new FileStream("MyFile.bin", FileMode.Open);
BinaryFormatter readFormatter = new BinaryFormatter();
SomeObject serialObj = (SomeObject)readFormatter.Deserialize(fs);
fs.Close();

Just like that we now have our object back and can start working with it.

Limitations...  Some things to keep in mind when doing serialization: I already mentioned that data marked as NonSerialized will be blank/null in the object.  But also keep in mind that deserializing an object is not the same as creating a new object.  Therefore, the constructor won't be called.

Also, with XML serialization only public information is serialized (binary serialization captures private fields as well).

Custom Serialization... I didn't touch on it, but you can implement the ISerializable interface and provide your own GetObjectData method to write custom serialization for an object.  By default you don't need to do this unless you have special requirements for the output of the serialization (say you're formatting your XML to a schema that another system will consume, etc.).
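A minimal sketch of what that looks like, reusing the SomeObject class from above:

using System;
using System.Runtime.Serialization;

[Serializable]
public sealed class SomeObject : ISerializable
{
    public int n1 = 0;

    public SomeObject() { }

    //called during serialization; we control exactly what gets written
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("n1", n1);
    }

    //special constructor called during deserialization
    private SomeObject(SerializationInfo info, StreamingContext context)
    {
        n1 = info.GetInt32("n1");
    }
}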

Thursday, November 4, 2010

Alternatives to Try Catch type casting

Back in the early stages of .NET, C# had limited tools for determining whether a cast or conversion would work or not.

So typically you'd see something like this:

int myInt = 0;

try
{
    myInt = int.Parse(TextBox1.Text); //some input
}
catch
{
    MessageBox.Show("This is not a valid integer.");
    return;
}

In 2.0, TryParse came along.  TryParse returns a bool (whether the conversion worked or not) and puts the value of a successful conversion into an out variable.

Example:

int myInt = 0;

if (!int.TryParse(TextBox1.Text, out myInt))
{
    MessageBox.Show("This is not a valid integer.");
}

Visually, this is a little less code, but it is also a more legitimate way of testing your conversion.  You could do a lot more checks using Regular Expressions, etc., but something like this is a simple way to do validation in simple scenarios... IE, you just need there to be a whole number, not a number with a decimal and x number of digits before and after, etc.

That example was for simple data types... but we can use the IS keyword for more complex types like objects.

The IS keyword basically evaluates whether the object to the left of the IS is (or inherits from) the type on the right.  This is useful in places where you want to pass an object as something simpler than it really is to make your function more versatile.

Using the Try Catch paradigm we could do something like this...

public static bool SaveData(object myObj)
{
    SomeBaseClass b = null;

    try
    {
        b = (SomeBaseClass)myObj;
        return b.SaveData(); //assumes that method returns bool
    }
    catch
    {
        return false;
    }
}

A simpler way is to use the IS keyword...


public static bool SaveData(object myObj)
{
    SomeBaseClass b;

    if (myObj is SomeBaseClass)
    {
        b = (SomeBaseClass)myObj;
        return b.SaveData();
    }
    else
    {
        return false;
    }
}
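For what it's worth, the "as" operator is another option here; it returns null instead of throwing when the cast fails.  A quick sketch:

public static bool SaveData(object myObj)
{
    //"as" yields null instead of throwing when the cast fails
    SomeBaseClass b = myObj as SomeBaseClass;

    if (b != null)
    {
        return b.SaveData();
    }

    return false;
}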

Now, in a case like this I'd probably make my SaveData function accept a SomeBaseClass parameter instead of a plain object and call the save... or make the objects implement some type of interface that any object wanting to use this method could implement.

However, this is just to give an example of how you can evaluate objects.  Maybe you support some types of objects (or handle them slightly differently) and want to run them all through the same method and this way you can easily do that.

Wednesday, October 27, 2010

Tech Tidbits

Nothing really new to discuss in the world of .NET right now (but I've got some stuff in the works to show).

A few things from around the tech world.

1. iPhone for Verizon.  It looks like it will be happening soon.  Two signs: AT&T has increased its termination fees, and Verizon is going to offer iPads with unlimited data plans.

2. iPad competition.  HP and Asus, among others, are offering tablet PCs comparable in features, size, and price to the iPad.  Expect more to come out in the near future.  Interestingly, many of these new tablets are running Windows 7.

3. Windows 8.  It is being reported that Windows 8 is about 2 years away (give or take).  Windows 7 was released just a year ago.

Friday, October 15, 2010

Technical Interviews

In my career I've done numerous in-person interviews, phone screens, technical evaluations, skill tests, etc., both from the employer and employee side, and it amazes me how many people seem to bomb job interviews.  Normally, a person decides whether they want to hire you within the first few minutes of an interview.  So you literally have only a couple of minutes to sell yourself to a potential employer.

Here are some common mistakes I've seen in interviews and how to avoid them:

1. Being too general.  If an interviewer asks you about your experience, the likely answer is to give a high-level overview (C#, ASP.NET, SQL), which doesn't tell them much.  Likely, since you're being interviewed, they already know this information.  Instead, talk about what you do with the technologies you use.

I've heard countless interviews where the interviewee will describe their experience by merely listing out keywords: "C#, SQL, ASP.NET, Web Service, SQL, Oracle..."

A better example: "I primarily build ASP.NET sites using C# and ASMX web services.  On my sites I use the AJAX toolkit to communicate with the server to get combo box values and dynamically load content based on user selections.  I also use JavaScript for client-side validation of user input."

You might also throw in any applicable controls you've used that the employer might be looking for.  Example (continuing from the first example): "In my most recent project at ABC I've been using the RAD controls for data grids that display accounting information that the user can edit, which is 2-way data bound to my Entity Framework business objects."

2. Overstating your qualifications.  Generally, questions like "What is your experience with x?" are setup questions.  If you answer "yes" then you'll probably get 3-4 more follow-up questions.  Be honest about your experience with a technology and what you have done with it.  If you don't have any experience, then a good answer is "No, but I'd like to learn more about it and work with it."  Additionally, try to take a positive spin if possible and talk about a similar technology or approach you've used in the past.  Example: "I don't have experience with BizTalk, but I have used Windows Workflow before to create processes and tie business processes together."

3. Being an interview killer.  Killing an interview is where you say something (or several things) that will automatically disqualify you to an employer.  Listening is key to getting a job.  Listen to what the interviewers say.  If they have a different lingo for something; use their lingo.  For example if they call XML returned from a web service a "payload" then call it a payload when you talk about it instead of an "XML string" if that's your word for it.  This tells them you are listening and helps show that you can fit in and adapt to the team.

Additionally, don't contradict the position you are interviewing for.  For example, many times I will tell someone the position is a UI or back-end job and they'll later state their preference is the opposite (or something different) of what we're looking for.  If the job is not one you want; don't interview for it.

Last, don't be absolute about how you do things.  For example, "I would never hard code SQL into my application."  This can make for tense moments and give a bad impression if this is a habit or approach your potential employer is taking.  You might say instead, "Normally, I don't hard code SQL into my application I keep it in the stored procedure but there are circumstances I would do things differently."

In review: always talk about what you've done with the technology (not just which technologies you know), be honest about your qualifications and turn negatives into positives by talking about similar technologies or approaches taken in the past, and don't kill the interview; listen, use their terms, and don't be absolute.

Tuesday, October 12, 2010

XBox 360 Data Migration Cable Warning

I recently purchased a 250GB XBox 360 drive on eBay and ordered a data migration cable for the XBox 360 through another seller.

Visually, the cable I received looks like the Microsoft cable but it doesn't have Microsoft stamped on the side.  The included software was also not Microsoft and when I inserted the disk the label of the CD was in Chinese!

After scouring the internet I found that the real data migration cable works by inserting the new drive into the XBox 360 slot and attaching the old drive using the cable.  When you power on your XBox it recognizes the old and new drive and offers to migrate your data automatically.  No CD required!  However, since this wasn't a real Microsoft cable... it didn't recognize my old drive when I tried this.

Additionally, the software that was included with my knock-off kit had very poor instructions and didn't recognize the drive when connected to my laptop either... so I couldn't even attempt to copy my data over via that mechanism.

I'm now a little suspicious that the eBay seller, despite a very high rating, bought a hard drive case and crammed a compatible drive into it.  This could violate the XBox TOS (terms of service) and cause me to get banned from XBox Live!  I'm going to thoroughly investigate the drive before using it!

Thursday, October 7, 2010

Google Chrome - very good with some minor flaws.

I have to say that since switching to Chrome in April I've been very impressed with it.  For the most part it is visibly faster when browsing the overwhelming majority of sites compared to IE.

Typically I browse on a dual 1GHz machine or a netbook running a 1.7GHz Atom processor, and reserve games for my XBox 360 and heavy-duty processing (database and programming) for a dedicated machine.  So in these slower CPU environments you can really tell the difference between a browser that is working efficiently and a higher-end machine brute-forcing its way through inefficient code.

That said, there is one situation where I find Chrome lacking and switch over to IE: e-commerce.  I've noticed on several sites, when I attempt to purchase something and SSL is involved, that I get session timeouts, errors, or domain mismatch warning messages.  Even if I click "Proceed Anyway" I have issues.  I switch over to IE and have no problems.  So I'd say it's probably 5% IE (online purchases and some rare cases where Chrome doesn't render a site properly but IE does) and 95% Chrome in terms of my browsing usage.

As an aside some websites to try when comparing Chrome to IE:
Realtor.com
FoxSports Fantasy Football site (which is horrid; see my random blog entry)

Increasing Computer Performance

I haven't really had any blog worthy tech stuff recently so I thought I'd relay some information about Overclocking in computing for those who might be interested to know more.

Basically, with overclocking you are increasing the CPU's clock speed, and therefore the number of instructions it can process in a given period, by changing some basic BIOS settings.  Typically, at the expense of stability (though this can be nominal) and power consumption, you can see major speed improvements over your stock CPU settings.  Conversely, many laptops use underclocking, which saves power and wear on the CPU by decreasing the clock speed.

Overclocking has two components: the clock multiplier on the CPU and the clock rate of the FSB (Front-Side Bus).  The CPU multiplier governs the instruction speed internal to the CPU, while the FSB governs how quickly information is sent to the other components.  Quick note: the FSB is the communication pipeline from the CPU to the Northbridge, which communicates with many of the other motherboard components like the RAM and installed cards.  Contrastingly, the Southbridge communicates with the "slower" devices like drives and other I/O devices.  For newer chips the FSB is being replaced by the QPI (QuickPath Interconnect), but it's the same concept.

By adjusting the multiplier settings and clock rate you can increase the performance of your machine if done correctly.  You'd also change the timings on your RAM to get an increase in performance, since any slow point in the chain is going to cause bottlenecks.  You'll notice many GPUs have similar settings to the motherboard for overclocking.

One thing you'll also need to factor in is cooling and your power supply's ability to handle the increased draw for overclocking.  This is why many overclocked machines use water or other non-air cooling solutions in addition to high wattage power supplies.

There is no real formula for which settings to use to overclock your machine... it's a process of trial and error, but there are sites that might help you in this area.

One last thing to talk about is unlocking vs. overclocking.  Overclocking increases hardware performance by increasing power consumption, while unlocking enables existing hardware functionality that was intentionally disabled by the manufacturer.  Unlocking is applicable in scenarios where a piece of hardware is the same across product models but high-end features are disabled.  An unlocked card would be a cheaper card in that line that has been flashed (firmware updated) to allow all the features normally reserved for the higher-end card.  The drawback to this method of performance tuning is that some lower-end cards are not tested to work at their max settings, so it might not work, OR the manufacturer has modified the hardware in such a way that the features don't work on the lower-end card, like severing the pathways on the board.

Thursday, September 23, 2010

Learning from the Mistakes of Others

I read an article recently about the now-infamous BP oil spill and what went wrong.  Much of the focus was on the mechanism in place, called the Blowout Preventer or BOP, that failed to seal the oil pipe after the Deepwater Horizon rig exploded.

The Popular Science article I read paints quite a different picture.  In the weeks, days, hours, and minutes leading up to the explosion, processes were not followed, bad practices were employed, and warning signs were ignored.  All these failures transformed a preventable situation into a global disaster.

A lesson to be learned here is that in our IT roles we have the power, here and now, to prevent catastrophic failures for our organizations by following process, establishing and implementing best practices, and paying attention to signs of a potential problem.

I'll give a few examples of some situations I've seen, what went wrong, and how it could have been prevented.

1. Virus and database failure.  An employee receives an email with a virus that propagates through his machine and into other computers on the network.  The virus gets into a SQL Server and corrupts the database.  On-site backups fail, off-site backups brought in fail, and luckily a production copy had been moved to a testing database server and was used to restore production.

What went wrong and how to prevent it?  The employee whose computer was infected had the virus scanner disabled (he felt it was slowing everything down), Outlook was allowed to run scripts automatically (IE, wasn't locked down properly), and when the employee noticed his machine was out of space and a mysterious file was all over his computer... he didn't notify anyone and left his machine connected to the network.  As you can see, a series of failures.  Next, the database backups were assumed to be good, but the backup process was never tested.  Doing daily backups and shipping backups off-site is a great idea, but only if the backups work.  A restore to a test server should be done periodically to ensure backups are working.  Also, there was no log shipping or other routine in place to capture data between backups.  Luckily, not much data was lost, but some surely was, because the "restore" data was a one-off copy of production data to a test database.

2. A power outage occurs in an office building.  The server room backup power kicks on, but the outage is expected to last a while, so generator power is started up.  After all the IT staff have gone home for the day, but while employees are still working to meet a deadline, all IT services are lost.  It takes several calls and several hours to resolve.

What happened and how to prevent it?  IT did not have a clear procedure or process for running their server room on battery backup and/or generator power.  Ironically, they had just completed a Disaster Recovery Plan and didn't contemplate this scenario.  An outside consultant handled the server room maintenance, and their procedure for using generator power included unplugging the main power line for the servers.  When power was restored and the generator taken offline, the servers were running solely on battery power.  Several hours later, after everyone left, the battery backups died.  As mentioned, process played a part here.  Someone should have known and ensured the servers were running on the right power source.  No tools or notifications were set up to notify IT when servers started running on batteries or what the status of the batteries was.  Obviously, once all the servers went out, there was no way to remotely access them, and a further delay was caused trying to locate the right people, figure out the problem, and physically get someone to the building to correct it.  Further compounding the issue, scheduled tasks did not get completed and some processes were stopped mid-process, so it took a full day of the programming team's time to assess anything in flight that might have failed.

3. This last scenario involves ignoring warning signs.  These events happened at several different places, but the results were the same.  In both cases the server room started getting progressively warmer and nobody bothered to investigate.  The problem culminated in a number of servers shutting down due to overheating.  The IT staff had no plan for handling A/C failures, and by not investigating didn't realize the dedicated A/C units in those server rooms were not working.  The servers would be restarted only to die a few minutes later, overheating again.  In both cases, someone had to scramble to find some type of large fan to cycle cool air into the server room.  And in both cases, weeks after the A/C was fixed, several machines experienced drive failures and other quirks likely due to the overheating.

What went wrong?  Someone noticed a problem and failed to investigate.  A server room is normally small with a dedicated A/C unit, and without that cold air the room can quickly get very hot, causing the servers to overheat.  As mentioned in scenario 2, while this office had a Disaster Plan, it didn't include environmental failures like this.  While some servers were turned back on, others were left off (stopping work for some departments) to keep the heat down.  There was no plan for the priority and duration of various departments' operations.  Additionally, this business had a "hot" backup site, but no mechanism was in place to switch over.

Sunday, September 19, 2010

Uses for Power Line network plugs

Many people complain the top speed of the "Power Line" style network adapters are slow but I've found a few ways and places that these can be useful.

If you don't know, the Power Line adapters plug into the wall and use your existing wiring to transmit data.  This works because the current going through the lines is a wave, and if you remember back to high school science, waves have periods where they are at "0".  During these "0" times, the adapter can send information.

1. I've used one of the newer NetGear Power Lines to access internet content on my Xbox 360.  I normally use it to download patches, updates, and watch NetFlix (I'm normally able to get full picture quality with no lag).  I'm not sure how good this would be for gaming but for my purposes it works great.

2. I keep one always connected to my router and the wall so that I can have a floating hard-wired connection in the house as needed.  Sometimes my household wireless has trouble in some spots and the speed is not good enough for what I'm trying to do (like watching a streaming video on Netflix on my Netbook) so having the ability to plug in a connection anywhere is ideal.

3. Network printing.  I couldn't find a spot in my office for the printer so I put it in a closet and used a Power Line for connectivity to it.

4. WAPs.  Another solution to problem 2 is to use a Power Line plug to provide a Wireless Access Point a connection to the router.  This way you can go wireless but have extended range or test your WAP's range and performance before running CAT5.

5. IP Phones.  Allows a connection for IP phones virtually anywhere in your house or your VoIP router.

Monday, September 13, 2010

ClearCase ... not a fan

So one of my clients is switching to ClearCase for source management.  Not a fan!

With say Visual SourceSafe the install is as simple as running the install, maybe specifying a few simple options, and waiting for the install to run.

With ClearCase I've had literally 30 steps to perform, and it's still not installed and integrated with Visual Studio.

Some steps:
1. To check the version, I had to run a batch file and email the output to someone.  Ironically, the utility runs and the output message is "take this text file and email it to the person who sent you this utility."

2. I've had to search the HDD for files so I could update the registry with the correct path.  (Why couldn't the install figure this out and update the registry?)

3. I've had to run additional command line utilities to attempt to register ClearCase with Visual Studio.

Really?  I feel like we're back in the 90s running EMM386 and changing Config.sys files to make enough expanded and extended memory available to play an MS-DOS game!

People who complain Windows is too complex/buggy need to try installing ClearCase!

Thursday, August 26, 2010

iPhone/iPad Development

So I was looking into doing an iPhone/iPad application.  Turns out, Apple is quite proud of their stuff.

You are looking at $99/yr minimum in order to develop iPhone/iPad applications.  You are looking at $299/yr to do Enterprise applications (IE, the ability to deploy proprietary applications in-house).

And... you have to have a machine running OS X to run the IDE.

For someone like me who has never owned a Mac... I'm looking at about $700 (likely closer to $1,000) to get a new machine running OS X to develop with.

So $800-$1,100 minimum to start developing applications for ONE platform.

In contrast, I could buy a computer and a full featured version of .NET for that price and have the capacity to do Windows, Web, Smart Phone, etc development.  OR... download the Express versions for ... FREE.

There's a reason Apple lags behind Microsoft in the business environment... this could be one of them.

Wednesday, July 21, 2010

Stack vs. Queue

I was asked the other day about First In, First Out (FIFO) and First In, Last Out (FILO) data structures.

Sticking to basic data structure examples a Stack and a Queue represent these type of operations.

Queue

Think of a queue as standing in line at the movie theater or anywhere else you would stand in line.  You stand in line until called up (usually when a ticket window is available to help you, or in computer terms, when a queue item is ready to be processed).  The first person in line will be the first person helped, and the last person in line will be the last person helped.  So First In, First Out: FIFO.  Conversely, Last In will be Last Out.

Stack

In a "traditional" implementation of a stack things will be First In, Last Out; FILO.  Think of a stack as a stack of papers.  As you stack the paper the first paper in the stack will be at the bottom.  So as you work through the stack of papers, the last item will be first one off the stack.  So First In, Last Out.  Conversely, Last In, First Out.

Of note, putting an item on a stack is called a "push" and removing an item from the stack is called a "pop".
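A quick illustration using the built-in generic collections (a minimal sketch):

using System;
using System.Collections.Generic;

Queue<string> line = new Queue<string>();
line.Enqueue("first in line");
line.Enqueue("second in line");
Console.WriteLine(line.Dequeue()); //"first in line" - FIFO

Stack<string> papers = new Stack<string>();
papers.Push("first paper");
papers.Push("second paper");
Console.WriteLine(papers.Pop()); //"second paper" - FILO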

Usage in Applications

A queue is probably used more often than a stack, based on my professional experience.  Generally, items in a queue follow an order such that you want to process earlier items before later ones due to business rules.  For example, imagine I had a queue for order creation and changes.  An order must exist to be changed, so logically I need to process the creation of the order first.  Using a queue, the creation action will always occur before the changes.  Then when a change occurred, I could apply it to the order, and a subsequent change would be applied to the order on top of the previous change, etc.

A stack is well suited for cases where the newest information is what you'll access first, so a cache or something like that.  I might have a stack to store objects where the most recently used object will be the first item popped off the cache.

Friday, July 16, 2010

Artistic PC Mods

Something I've become interested in recently is optimizing living space.  A basic example, why have a room dedicated to computers, monitors, scanners, faxes, etc?  Instead of buying a monitor why not buy a more expensive TV that has the resolution I need and use that as my monitor?  Especially considering I can't use the computer in one room and watch TV in another room at the same time.

The challenge in this type of a situation is how to hide/integrate your electronics.  The computer will overheat if you shove it in an enclosed space but also is cluttered looking if you shove it next to your entertainment center.

Presently, I'm looking into three different solutions that I wanted to share and I'd be interested to know if anyone does these and how they turn out.

1. Functional Computer Wall Art:  Ditch the case and mount your computer components to a piece of colored peg board or plywood.  Lay out the components so it's visually interesting (artistic) but also so everything is connected and functional.  Maybe add some lighting or other dramatic effect to have a functional, one-of-a-kind piece of art.  With something like this you could mount it above your TV or on a wall and cleverly run cables in a neat and orderly manner.

2. Computer Table: Build your computer into a coffee table or something such that you can use the table as a table but when needed it holds your keyboard, mouse, and computer components required to run.  If you have a flat panel you could mount it horizontally under a transparent acrylic table top.  If done, I would try and make the components (like the wall art) look interesting.

3. Submerged Computer:  I've been thinking about this one, but it's a little expensive to set up.  Basically, you take a small fish tank (about 5-6 gallons) and mount your computer components inside.  Instead of water, you use mineral oil, which is non-conductive.  The mineral oil serves to draw heat from the computer components as well as give the appearance of, well, an aquarium.

Most plans you'll see for this on the internet (and I agree with them) state that you should only submerge the motherboard and its accessories.  Hard drives and optical drives should be mounted behind the tank or in the light housing on top.  Both types of drives rely on high-RPM rotation to work properly, and the mineral oil would cause them to spin too slowly.  While a hard drive should be sealed well enough to be submerged, it's not recommended.  A solid state drive, however, will not have any problems.

One thing you'll notice in such a setup is that the fans will spin slowly due to the friction of the mineral oil.  No worries, as the oil is what is absorbing the heat now, so the fans really offer more of a visual appeal than anything.  You could also use a pump and add bubbles to the aquarium too.

I've been contemplating a design where a smaller tank is submerged in the bigger tank but filled with water so I could put a fish or two in it.  That way it would really look like a real aquarium despite being two separate things.  My concern, however, is if water starts leaking and/or spills into the other tank.  Zap!

A final word of warning on these mineral oil tanks: the mineral oil has a tendency to wick, so over time you'd start finding oil dripping from your mouse, keyboard, etc.  The best solution is to make sure the ports are out of the oil and to use a wireless setup.

Saturday, July 10, 2010

Gazelle

I wanted to take a quick opportunity to mention a cool website for those looking for a no-hassle way to sell DVDs and small electronic devices.  The website is Gazelle.com.

The process is simple... you browse their site and find items you have that they are willing to buy.  It will ask you for the condition of the item and what accessories you have.  Based on this, you get a guaranteed price for the item.  If you have enough stuff, Gazelle ships you a box and a postage-paid label.  Otherwise, you get the postage-paid label but have to provide your own box.

Once the condition of your items is verified Gazelle will pay you via PayPal, Check by mail, etc.  If there is a dispute between the condition you listed and what Gazelle believes the condition is then you can either accept the revised offer or have them ship you the item back for free.

Last, if an item no longer has value they will take it and either sell it to a reuse shop or recycle it.

The key is not to wait: items sitting around progressively lose value.

Some examples from this week of items they're collecting and prices paid for items in good condition:
Iron Man for XBox 360: $8
HTC Droid Eris : $128? I can't remember exactly.
Nintendo Wii  w/ controller: $78

But an old Blackberry like the 8700 series is a recycle item.

It is worth noting that you could get better prices from eBay or, god forbid, Craigslist, but you have the time, selling, and shipping costs to contend with.  If you're going for max price, then Gazelle may not be the answer, but if you're going for sold now, it might be a better bet.

Wednesday, July 7, 2010

Arduino and LaunchPad

There are two great products out right now to help you begin tinkering with running your software in conjunction with hardware devices.  And... it's never been easier!

The first is the Arduino boards.  The newer boards offer a USB interface to upload your application and communicate back and forth with the device.  There are a variety of sensors, GPS, wireless, ethernet, and other boards/cards/chips such that you can design exactly what you need.

Out of the Box ready Arduino boards run from $20-$40.  The sensors and LEDs run less than $2.00 but the networking/communication products run $40+.

Second is TI's LaunchPad kit.  The kit contains a couple of their value line chips and an evaluation board with some built-in LEDs and buttons.  Like the Arduino, the board uses USB.  Once programmed, these chips can be moved to a permanent board with whatever accessories you choose.  This kit is a mere $4.30.  Yes, four dollars and thirty cents.  However, the kit is currently on backorder through August.

Both platforms also have free software and the languages are a flavor of C++ with much of the complexity hidden behind pre-written modules.

Monday, June 14, 2010

eBay Today

I've been off eBay for a while and started using it again recently.  It seems like each year that site gets crappier and crappier.

For example, I've had several auctions where the price starts at $0.99.  I set my max bid to say $40, which is a fair price for what I'm bidding on.  The auction will have 3-4 days left on it.  So I'm the highest bidder at $5 or whatever.  Then suddenly the auction is cancelled with no reason given!

I know some people used to list an item on Craigslist or something but they'd typically have a Buy It Now price and at least tell you that they were trying to sell it outside of eBay.

The only other option I can think of is that the seller decided not to sell the item or worried they weren't going to get their price.  But due to bid sniping, you won't know what your final price is going to be near until the final minutes.

Now you bid with the possibility that whatever you bid on can be pulled at any time.  If anything, this is making me look for sellers who run "Sell it on eBay" stores or are "professional" eBay auctioneers.  Those with low ratings, etc are losing my business because I can't take their auction seriously!

Wednesday, June 2, 2010

Windows Home Server

I've been using Windows Home Server for about six months now and feel comfortable giving it a review now.

Overview


Earlier this year I realized my tech at home was getting out of control.  I had a personal desktop, work desktop, laptop, digital camera, digital camcorder, my wife's laptop, XBox 360, camera phones, etc.

I frequently found myself looking across multiple devices to find documents, photos, baby videos, etc.  I also found that I had a lot of physical media (Music CDs, DVD movies) that I was storing in cabinets when I could save them to a centralized location and store the physical media in the attic or storage closet.

Originally, my plan was to build a full-blown server and set up a domain, shared folders, etc.  But why implement an Enterprise solution for a relatively trivial task?

The solution I decided upon was Windows Home Server.

Getting to WHS


I'd looked at some other free products but didn't find anything I was really enamored with.  There were plenty of applications that would allow me to share my content with the XBox and handle MP3s/MPGs.  But these tools didn't handle documents very well.  I also wanted some directory security to avoid someone accidentally moving or deleting source code for clients I support while trying to copy some photos.  I also wanted to be able to make some content public for friends when they visited and hooked up to the WiFi (for example, free software I downloaded and use, but not my taxes and bank statements).

I could have gone with a mixed solution, Active Directory for file management, and another program for media syndication.  However, I wanted a product that really integrated seamlessly across all boundaries.

Ultimately, I found WHS would be my best option.

Using WHS


Installing WHS is like any other Windows install you've ever done, so there were no surprises there.  Well, unless your host machine doesn't support modern power management functionality.  I ended up having to change out the motherboard on my host machine (a rack mount server) for a compliant board.

Once installed, you really never need to go back onto the WHS machine.  Instead, you install a "connector" application on the machines you want to use WHS with.  I don't recall needing to do anything special for the XBox; it just sees the server and can access whatever publicly accessible content is available: pictures, movies, music.

Once installed, the connector shows up as an icon in your system tray.  When the server is running and your PC is "healthy" the icon is green.  Otherwise it will change colors depending on the situation.

Setting up users and file permissions is very easy.  You simply create them using the WHS GUI and assign users rights (none, read, full).  It basically handles the Active Directory-type functions for you.

You also get a "Shared Folders" desktop icon to access the Shares you have access to.

There is no magic here, just a simple and easy to use interface for sharing data on your network.

Misc Notes


WHS also automates backups of your machines which is handy if you're looking for a centralized backup solution.

Another cool feature is that WHS handles the allocation of the disks for you.  Once you mount a drive, WHS will automatically handle it.  Instead of having physical disks, you have one big virtual disk that expands and is managed by WHS automatically.

Next Steps


For me, my next step is to figure out how to store DVR data on my server.  My DVR fills up quickly and I end up having to delete programs I wanted to keep!

Wednesday, May 19, 2010

Application Security Overview

Overview

I rarely find an organization that has someone whose sole job is to ensure the security of data and applications across an enterprise.  Therefore, a lot of that job falls on the shoulders of developers.  I think security is a rather broad topic with a lot of considerations, but for the purposes of this blog I'll stick to some basic scenarios.

Really, there are two major scenarios to consider in regards to security: the data and the transport of the data.

Transport Security

The most basic and reasonable method of securing your data from point A to point B is to use a mechanism like SSL.  Normally in HTTP, data is sent in clear text.  Using SSL, the client and server agree on a key to use, and the data is sent in an encrypted form across the wire, where it is decrypted on the other end.  As you would imagine, this process increases the size of the payload being sent and returned, but it prevents someone who intercepts the packets from viewing the contents without first breaking the encryption.

This performance overhead is one of the reasons you see merchant sites unsecured until you switch over to process a payment, at which time SSL is employed.

Data Security

When securing your data, you have options but need to choose carefully.  For example, if you encrypt data in the database, that field is no longer searchable by the database engine.  Additionally, as with SSL, there is the performance hit of decrypting.  Lastly, you have to consider reporting and other functionality that may have no/limited ability to decrypt data.

Presently, Rijndael is a popular encryption algorithm and works pretty well.  You will have to specify two key values (a key and an IV) in order to encrypt and decrypt data, and both the source and destination machines must have the same values or this won't work.  A practical application of this might be to encrypt data going between two machines across the internet via a web service.  (Tho again, SSL can also be used... so Rijndael would be an added layer of protection in that case.)  This might also be useful when passing a token to a non-secured web service to authenticate.
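A minimal sketch of encrypting a string with RijndaelManaged (the key and IV here are zero-filled placeholders; in real code both would be generated and stored securely, and shared by both machines):

using System.IO;
using System.Security.Cryptography;
using System.Text;

byte[] key = new byte[32]; //256-bit key (placeholder)
byte[] iv = new byte[16];  //128-bit IV (placeholder)

byte[] plainBytes = Encoding.UTF8.GetBytes("sensitive data");
byte[] encrypted;

using (RijndaelManaged rijndael = new RijndaelManaged())
using (ICryptoTransform encryptor = rijndael.CreateEncryptor(key, iv))
using (MemoryStream ms = new MemoryStream())
{
    using (CryptoStream cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
    {
        cs.Write(plainBytes, 0, plainBytes.Length);
    }
    encrypted = ms.ToArray();
}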

On a machine, I like to use DPAPI.  This came integrated in Framework 2.0 (IE, there are objects for it you can use) and encrypts data with a key specific to the machine.  This is great for securing connection strings and other components of your app or web config file.  I'm not going to go into detail, but basically you can use the ProtectedData class (part of System.Security.Cryptography) with the LocalMachine protection scope.
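A quick sketch (the optional entropy parameter of ProtectedData is passed as null here):

using System.Security.Cryptography;
using System.Text;

byte[] secret = Encoding.UTF8.GetBytes("my connection string");

//encrypt with a machine-specific key; any process on this machine can decrypt
byte[] encrypted = ProtectedData.Protect(secret, null, DataProtectionScope.LocalMachine);

//...later...
byte[] decrypted = ProtectedData.Unprotect(encrypted, null, DataProtectionScope.LocalMachine);
string original = Encoding.UTF8.GetString(decrypted);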

Alternately, you could do a DllImport and use the encryption DLLs directly, but this approach is frowned upon since it calls unmanaged code.

Recap

Use SSL to protect data in transit from server to client; HTTP is clear/plain text by default.
Use Rijndael to encrypt/decrypt sensitive data; remember you can't search encrypted database fields.
Use DPAPI to encrypt data with a machine-specific key.

Delegates

I thought I posted this a while back but it doesn't appear on my list of blogs so I'm reposting it.

A delegate is what used to be called a function pointer in C++. What it basically allows you to do is define the method signature of a function that is going to be called at run time without knowing what the function is at compile time.


Where this is useful is in cases where you have object(s) that need to register to be notified when something happens so it can take the proper action. This may sound a lot like an event; that's because an event is basically a variant of a delegate.

Two special types of delegates are the anonymous and multi-cast.

The anonymous delegate is where you declare a delegate but define the function to be called inside the delegate declaration. Example:

button1.Click += delegate(System.Object o, System.EventArgs e)
{ System.Windows.Forms.MessageBox.Show("Click!"); };

So we're targeting a method that accepts (object, EventArgs), but we're actually defining the implementation inside the declaration. So this delegate, when fired, would display Click! in a message box.
With a multi-cast delegate, you can target multiple methods from a single delegate. I don't recall ever using this feature much myself, but a quick sketch of the idea is below.
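For illustration (Action here is just the simplest built-in delegate type to demo with):

Action notify = delegate { Console.WriteLine("First handler"); };
notify += delegate { Console.WriteLine("Second handler"); };

notify(); //invoking the delegate fires both targets, in the order they were added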

The most common place I use delegates is when I'm trying to filter a List of items. I normally use an anonymous delegate.

So assume I have a class, Employee, with a FirstName field... and I've stored all my Employee records inside a List<Employee> called MyList.

Example: List<Employee> MyListOfBobs = MyList.FindAll(delegate(Employee e) { return e.FirstName == "Bob"; });

So here I would get back all the employees in MyList where the FirstName was equal to Bob.

I could also do something like this... MyList.FindAll(FindAllBobsMethod);

private static bool FindAllBobsMethod(Employee e)
{
    if (e.FirstName == "Bob")
        return true;
    else
        return false;
}

Tuesday, May 18, 2010

Partial Classes

The concept of the partial class came into play in Framework 2.0.  However, I've noticed that people have either forgotten or didn't notice this handy keyword.

Framework 1.0

If you used the 1.0 Framework, you noticed when you created UI elements that all the back-end code for those controls ended up in the form's code-behind.  Typically, this resided in a region called something like "Designer Generated Code".

To me at least, this cluttered your code-behind with code that was required but wasn't YOUR code.

Framework 2.0

The solution to this and many other problems came in the 2.0 release of the Framework with a new keyword: partial.

The partial keyword essentially tells the compiler "hey, my class is defined across several definitions, usually across multiple files".  It almost reminds me of header files in C++, but that's neither here nor there.

You can use partial on a class, struct, or interface.

So what?

Well, the two reasons MSDN's documentation suggests using the partial keyword are the same two reasons I like it.

1. It keeps system generated code in its own section away from your code.
2. (This is the big one...) It allows teams of developers to break a class into several logical files, reducing/preventing waits for code to be checked back in before you can edit a section.

Like I said, reason 2 is my favorite.  Countless times I've needed to add an item to an enum or add an overload of a function to a class... and it's checked out.  I would even say in about 90% of the cases, the other developer was changing things unrelated to what I wanted to change!

My rule of thumb is to break the code into logical chunks when using a partial class.  For example, if I had an ATM object, I might break the class up by the functions the ATM can perform.  Obviously, the class name will be the same, but I like to use a classname_function type name for all the partial files.

Example:

ATM.cs (maybe where I keep the constructor, declarations, and properties)
ATM_Withdraw.cs
ATM_Deposit.cs
ATM_GetStamps.cs
ATM_PrintStatement.cs
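A minimal sketch of what that split might look like (the balance field and withdraw logic are just illustrative):

// ATM.cs - the main declaration
public partial class ATM
{
    private decimal m_Balance;

    public ATM() { }
}

// ATM_Withdraw.cs - a second file contributing to the same class
public partial class ATM
{
    public bool Withdraw(decimal amount)
    {
        if (amount > m_Balance)
            return false; //insufficient funds

        m_Balance -= amount;
        return true;
    }
}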

Usage

Using the partial keyword is easy!  Simply put "partial" after the scope (public, private, protected, etc.) and before the type (class, struct, or interface).

Example:

public partial class Form1 : Form

Note:  This would be the main declaration of the class.  A subsequent declaration wouldn't need the inheritance.  So your other partials would look like this:  public partial class Form1

Other Considerations

Keep in mind that the entire class definition has to be in the SAME namespace.

All parts must have the same scope (accessibility).  IE, it's all public or all private, etc., not mix and match.

Monday, May 17, 2010

Droid 2.1 for HTC Droid Eris

The Verizon update for the HTC Droid Eris has dropped in the last few days.  Users will be prompted to install a system update.  This will bring your phone to Droid 2.1 and give it many of the features that the Motorola Droid users have been enjoying the last few months.

There are 4 features that I am excited about:

-New support for voice-to-text entry. Whenever a text-entry box appears, simply tap the microphone icon on the virtual keyboard and speak.

-Google Maps with Navigation provides free, traffic-enhanced,turn-by-turn navigation.

-Longer battery life due to power savings.

-Faster power-up time

NOTE:  You will need 25MB of free internal memory to install this update.  The easiest way to get this is to go into System Settings -> Manage Applications and go to the "Browser" application.  There will be an option on the page labelled "Clear Cache".  If you use your browser much, this cache will probably be over 25MB, and now you have the space you need to install.

On my install, I lost all my contacts.  Luckily, I manage my contacts through Gmail.  All I had to do was force a sync of my contacts with Gmail and I was back in business.  One quirk was that a few contacts (husbands/wives who share a Facebook account) were merged into a single contact.

Friday, May 14, 2010

How To Write Clean Multi-Step Code

Scenario

You have a process with some number of steps, and each step must complete successfully before the next step can continue.  A scenario I've seen before in peer reviews and production code is a nested set of IF statements to determine if the next step should be processed, etc.
I've provided a simple (yet verbose) example of what this code might look like below.
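(A sketch of the pattern; the Step1 through Step4 methods are stand-ins for the real calls.)

bool success = Step1();
if (success)
{
    success = Step2();
    if (success)
    {
        success = Step3();
        if (success)
        {
            success = Step4();
        }
    }
}

if (!success)
{
    //handle the failure
}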



Notice that we're checking after each call to see if the previous call worked.  As you may also see, we are duplicating essentially the same code over and over again.

A cleaner way to do this is to implement a while loop and standardize the code.  The loop calls each item in sequence and checks whether it should continue processing after each step.  As you can see below, this is a much cleaner way to implement a multi-step piece of code.
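(Again a sketch, using the same stand-in step methods; each step shares a bool-returning signature so the steps can be called uniformly.)

Func<bool>[] steps = { Step1, Step2, Step3, Step4 };

bool success = true;
int i = 0;
while (success && i < steps.Length)
{
    //call the next step; stop as soon as one fails
    success = steps[i]();
    i++;
}

if (!success)
{
    //handle the failure
}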

Debugging SQL Objects

If you are using Visual Studio and open up a SQL object (proc, function, etc.), you have the option to right-click and set a breakpoint, much like you can with C# or VB.NET.

If you are a purist (I guess I would call myself one) then you want to do all your SQL stuff in SQL.

A great way to debug objects in the database is to use a statement to output the various aspects of the execution of your objects to the output window in your database tool.

In MS SQL, the command you want to use is PRINT.  In the example below, I have a procedure that validates some number, and my business requirement is that this number cannot be less than 0.
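A sketch of what such a procedure might look like (the procedure name and messages here are illustrative):

CREATE PROCEDURE ValidateNumber
    @Number INT
AS
BEGIN
    PRINT 'Validating number: ' + CAST(@Number AS VARCHAR(20))

    IF @Number < 0
    BEGIN
        PRINT 'Validation failed: number is less than 0.'
        RETURN 1
    END

    PRINT 'Validation passed.'
    RETURN 0
END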



As you can see here, it's very simple to put these PRINT statements in code.  This is a very simple example, but you can imagine having a complicated set of code where you needed to run it and get meaningful information regarding the execution; this would be a great way to do that.  You could throw exceptions as well, but that's another topic for another day.

When I tested my object, the output from these PRINT statements showed up in the output window of my database tool.


You can do the same thing in Oracle, but the command you would want to use is DBMS_OUTPUT.PUT_LINE.

Tuesday, May 11, 2010

Using IIS Locally

This is really a quick note more than anything.  Many times peers have asked me to lend my eyes to a problem they've had when testing some web code.  Typically, the code works on the server but not on the local machine.

Almost without fail, the problem is that the developer is running the ASP.NET Development Server (Cassini) on the local machine rather than running the application through IIS.

The built-in web server for ASP.NET is OK for simple things, like checking whether a page displays correctly, but lousy for testing the full features of your application.

Namely, here are a few things that don't work using the ASP.NET web server.  Keep in mind, I haven't tried these things in VS2010 so I'm not sure if it's still an issue or not...

1. Doesn't accept any connections from remote machines (localhost only, so you can't have peers look at your work)
2. Server.MapPath doesn't work
3. Some File IO operations don't work
4. IIS security type things don't work
5. Can't run non-ASP.NET pages (i.e., no classic ASP and other types of scripting)
6. SSL may not work (haven't tried it)

Monday, May 10, 2010

The goto Statement

The goto statement has its roots back in Basic, prior to its arrival in .NET.  The value of the goto statement is the ability to jump around in your code.  Where I find it most useful is in cases where you may need to re-run a portion of your code, or where you don't want to (or can't) use a recursive call.
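To make the example concrete, here is a minimal sketch of the pattern; the ReAttempt label matches the discussion below, while the service and response types are stand-ins I made up:

using System;

public class ServiceResponse
{
    public bool Failed;
    public string ErrorMessage;
}

// Stand-in for the important-but-unpredictable web service.
public class FlakyWebService : IDisposable
{
    private static readonly Random Rand = new Random();

    public ServiceResponse DoSomething()
    {
        // Simulate spotty connectivity: fail about half the time.
        if (Rand.Next(2) == 0)
            return new ServiceResponse { Failed = true, ErrorMessage = "Unable to connect to the remote server" };

        return new ServiceResponse { Failed = false, ErrorMessage = string.Empty };
    }

    public void Dispose() { }
}

public class GotoRetryDemo
{
    public static void CallService()
    {
        int attemptCount = 0;

    ReAttempt:
        attemptCount++;

        FlakyWebService service = new FlakyWebService();
        ServiceResponse response = service.DoSomething();

        if (response.Failed)
        {
            service.Dispose();

            // Only retry connection failures, and only up to 3 attempts.
            if (response.ErrorMessage.Contains("Unable to connect") && attemptCount < 3)
                goto ReAttempt;

            throw new ApplicationException("Service call failed after " + attemptCount + " attempt(s).");
        }

        Console.WriteLine("Succeeded on attempt " + attemptCount);
        service.Dispose();
    }
}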


In the example above, I'm calling an important but unpredictable web service.  Let's say that Internet connectivity is spotty and I can't always connect to my web service.

Notice that above the code to connect to my service I have put a label (ReAttempt:).  You can use any name you want for your label (aside from a keyword) followed by a colon.

Now, I call my service and attempt to do something.  I've set up my example to return a response object with a Failed Boolean property and an error message.  So, my request fails and I determine it's due to not being able to connect to the remote server (I parsed the error message for the purposes of the demo and know that's what the message means).  I dispose of my existing web service and tell the program to "goto ReAttempt".  My code will now start back at declaring the web service.  Since I increment my attempt counter, my next attempt will be attempt 2.  (Unlike, say, a recursive function, I'm in the same instance of my method and so my variables don't reset, etc.)

So if by the 3rd attempt I can't connect to my service, then I'll throw an exception for my application to handle.

As you can see, I've quickly allowed my application to be fault tolerant and re-try some type of request to a resource without having to write very much code.

Doing Calculations in .NET

In the following example, I'm trying to do a basic mortgage payment calculation.  Assume that instead of hard coded values, I've collected these values from user input and validated them accordingly.

I know from doing application development for a bank that my number will be less than 1% of the loan amount and more than double the loan amount divided by the term (in months).

That means my range would be between about $1,100 (400,000 / 360) and $2,000 (200,000 x 1%).  That is a broad range but I should be somewhere in there.

But in fact, when I run my formula, I get 0.  My code is shown below.
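Here is a sketch of that one-line formula, assuming a $200,000 loan over 30 years (360 months) and a made-up 6% rate:

using System;

class MortgagePaymentBuggy
{
    static void Main()
    {
        // M = P * (J / (1 - (1 + J)^-N))
        double P = 200000;          // principal
        double I = 6;               // annual interest rate in percent (assumed for the example)
        double L = 30;              // length of the loan in years
        double J = I / (12 * 100);  // monthly interest rate as a decimal
        double N = L * 12;          // term in months
        double M;                   // monthly payment

        // BUG: the subtraction ended up inside the Pow call, so this
        // computes (1 - (1 + J))^-N instead of 1 - (1 + J)^-N.
        M = P * (J / Math.Pow(1 - (1 + J), -N));

        Console.WriteLine(M);       // prints 0
    }
}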



What I like about this code is that it shows the math formula I was solving.  That is about all I like about my code here.

The issues I have with this code:
1. Magic variables.  What do P, I, L, J, N, and M mean?  Since I have hard coded values here, it makes it easier to guess, but it's not very intuitive.

2. Since my formula is all bundled into one line, I can't check the intermediate steps to see where a potential problem is.

3. Some of these tasks might be useful in other areas but the code is not reusable.

4.  All my values are doubles because the Pow (Power) function requires a double as an input.  If I didn't do this, I'd end up having to cast all my variables.  Personally, I like to stick with the data type that is most reasonable for the data being contained, so I want to fix this.

Problem:  The actual problem in the code here is that I should be taking 1 - (1 + J)^-N.  Instead, I'm doing (1 - (1 + J))^-N, which makes the denominator astronomically large, so the payment I get is a number so small that it is effectively 0.

So, I've determined this code is hard to understand, test/maintain, and since I might not be the one supporting the code, I need to clean it up.

Here is an example of taking a formula and creating clean, reusable, and understandable code.
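Here is a sketch of what that cleanup might look like; the method names CalculateMonthlyPayment and CalculatePaymentRate come from the discussion below, while the rest of the naming is my own:

using System;

public static class MortgageCalculator
{
    // Formula: M = P * (J / (1 - (1 + J)^-N))
    //   P = principal, J = monthly interest rate, N = term in months,
    //   M = monthly payment
    public static decimal CalculateMonthlyPayment(decimal principal, decimal annualRatePercent, int lengthInYears)
    {
        decimal monthlyRate = CalculateMonthlyRate(annualRatePercent);
        int termInMonths = lengthInYears * 12;
        decimal paymentRate = CalculatePaymentRate(monthlyRate, termInMonths);

        return principal * (monthlyRate / paymentRate);
    }

    private static decimal CalculateMonthlyRate(decimal annualRatePercent)
    {
        return annualRatePercent / 12m / 100m;
    }

    // Math.Pow only works on doubles, so this is the one spot where
    // we have to convert back to decimal.
    private static decimal CalculatePaymentRate(decimal monthlyRate, int termInMonths)
    {
        double factor = Math.Pow(1 + (double)monthlyRate, -termInMonths);
        return 1m - (decimal)factor;
    }
}

With the sample inputs above, MortgageCalculator.CalculateMonthlyPayment(200000m, 6m, 30) comes out to roughly $1,199 a month, squarely inside the sanity range.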

I cleaned this up by creating a top-level function that wraps all the calculations (CalculateMonthlyPayment), but within that I actually break down each step of the formula so I can set a breakpoint during debugging and see what the actual value(s) are.

Notice I also put the formula in a comment alongside the CalculateMonthlyPayment method.  This is so if another developer works on this, they know (without digging) what this code is doing.

Also notice that I'm able to use decimals instead of doubles now.  Though, I do have to convert back to a decimal from a double in the CalculatePaymentRate function.

When I ran this, I found my bug and was able to fix the problem.


In summation, don't try to write calculations in one line; they're hard to debug and it's easy to make mistakes.  Break your formula down into steps and solve each piece clearly so you can debug it.

Friday, May 7, 2010

Enums

I received an email from a former co-worker asking about enums and I thought this would be a good quick entry for today.

The question was, what is the difference between the following two enum declarations:
a.  public enum Genders { Male, Female };
b.  public enum Genders : int { Male, Female };

The answer, in this case, is there is no difference!  The reason is by default an enumeration's underlying data type is an int.

However, the underlying type can be changed to any integral type except a char. So byte, long, etc...

But, you'll have to cast your enum value to that type to get the data out.

Ex. (this is from Microsoft's MSDN site)

enum Range : long { Max = 2147483648L, Min = 255L };

long x = (long)Range.Max;


Enums are also 0-based by default, so the first position (Male) will be 0 in both of these cases.

You can change the starting index number by setting the value of the first item in the enum:

public enum Genders { Male = 1, Female};

Now, Male will be 1 and Female will be 2.  And, as you've guessed, if you had a NotSpecified then it would be 3.

Lastly, you can set your own enum values as well.

public enum FoodRanking { Mexican = 1, Brazilian = 6, Italian = 2, American = 3, German = 4, French =5};

**Notice, your values don't have to be in order.

Wednesday, May 5, 2010

Microsoft Kin

In trying to keep my blog relevant, I wanted to mention that tomorrow Microsoft is releasing its entry into the social networking world with the release of the Kin and Kin Two phones.  The phones will retail through Verizon for $50 and $100 respectively.

These are not "smart phones" but rather social phones giving access to Facebook, Twitter, and Microsoft's social site "The Studio".  The Studio will feature a timeline approach where users can view their friends' activities and posts on a day-by-day basis.  Since these are not smart phones, there is no App Store and relatively few software packages beyond the things I've already mentioned.

Both models feature a physical keyboard, with the Kin having a Blackberry-style look and the Kin Two having a slide-out Motorola Droid look.  The Kin Two has double the memory at 8GB and features an HD-capable camera where the base model has a standard-definition camera.

You can see these devices at http://www.kin.com/ and, as I said, they'll be released tomorrow.  While geared at teens, this product might also be good for people who want to check social media on the go but don't need the features of a full-blown smart phone at nearly double the price.

On a minor footnote, HTC has also released the Incredible, which is their more direct competitor to the Motorola Droid.  No slide-out keyboard, but it features a 3.7" screen, 1GHz processor, 8GB memory, and the Android 2.1 OS.

Tuesday, May 4, 2010

Understanding Your Data Via SQL Objects

Prior to ASP.NET Dynamic Data and Entity Framework, I began development of a similar solution after realizing I spent a lot of time writing database I/O logic and admin screens rather than focusing on business rules and the "meat" of the application UI.

My solution included a tool that used SQL tables and stored procedures to create a fully operational class, much like CodeSmith and other code generation tools.  But because I like to build my own solutions, I still use my CodeGen tool and am in the process of updating it to use templates to allow for a more flexible implementation of my objects in the future.  Eventually, I may provide this tool for free and catapult myself into Internet fame and Theoretical Dollar fortune, but as of now, I don't have the new CodeGen tool in a usable state.

So, how am I doing this?  Well, it's information that is already readily available in SQL.

In your SQL database you have a series of system tables that define the tables, views, procs, etc. that you've created.

SYSOBJECTS

The first of these tables is the SYSOBJECTS table.  This contains a top level list of basically everything you've ever created in your database.

Seeing what objects are in your database is as simple as SELECT * FROM SYSOBJECTS.

This will return all the objects in your database.  However, there is some confusing and potentially useless data, so I'll walk through what is what.  ID and Name will be the two most useful items: ID will allow you to find the columns in these objects, and Name is the actual name you gave your object.

You'll notice that primary keys, foreign keys, etc also appear in here.  Any object that is a child of an object (such as the keys) will have a "parent_obj" ID.

The XTYPE is the indicator for what the object is.  So a user table is 'U', system (SQL) stuff is 'S', Primary Keys: PK, Foreign Keys: F, Views: V, and Procs: P.  There are a few others as well.

For the purposes of this example, to find all the user tables you'd look for an XType of 'U' and a status > 0. Without the status check, you can end up with some tables that are not yours.  I'm not sure why this is (this column is undocumented in SQL), but I found that this fixes the problem as user tables have a positive status value.
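So the user-table query ends up looking something like this:

-- User tables only; the positive status filters out the oddball entries
SELECT id, name
FROM SYSOBJECTS
WHERE xtype = 'U'
  AND status > 0
ORDER BY name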

So, at this point we know we have tables.  Now what?  Now, we need to inspect each table's design.

SYSCOLUMNS

Once we have the ID of the object we want to inspect, we can find out what its contents are from the SYSCOLUMNS table.

SELECT * FROM SYSCOLUMNS WHERE ID = 12345 (where 12345 would be your table's object ID).

This will return a lot of stuff, but again only a few fields are important.  Name is the name of the column; xtype is the data type of that column (more on that later); length is the length of the field in terms of its datatype (i.e., a string datatype's max number of characters or the number of bytes for an integer); scale and precision are there; plus the order of the column in the table and whether the field allows null.  There is additional info, but for the sake of brevity I won't discuss it.

As you can see, you can tell a lot about your data via these objects.  You do have to do some mapping from the SQL types to the .NET types, but that isn't difficult.

Speaking of types, you get that information from SYSTYPES.  Simply join the xtype of SYSCOLUMNS to the xtype of SYSTYPES.  SYSTYPES has some default information about the data type, but mainly name is what you are concerned about.  I only look for types with a status of 0 or 2, because otherwise you get back some funky stuff (one day I'll figure out why this is).
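Putting SYSCOLUMNS and SYSTYPES together, a column listing for a single table might look like this (12345 is again a placeholder object ID):

-- Columns and their type names for one object
SELECT c.name, t.name AS type_name, c.length, c.prec, c.scale, c.isnullable
FROM SYSCOLUMNS c
    INNER JOIN SYSTYPES t ON c.xtype = t.xtype
WHERE c.id = 12345
  AND t.status IN (0, 2)
ORDER BY c.colid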

Misc

I should mention that for tables and views you will get back all the columns.  As of yet, I haven't figured out how to relate the fields of a view back to their source tables.

For procs you will only get the parameters from the syscolumns table.  If you're trying to write an object around a proc you have to prompt the user in the Code Gen tool for some values so you'll get a result set back.  You can then get the column names and associated data types of the fields returned to build your class definition.

There are other tables for permissions, users, keys, indexes, etc that you might want to explore.  I would be careful of trying to modify these tables as it may have undesirable results.

Putting It Together

I didn't include the full tool's code here because it would be quite lengthy, and I encourage you to play with your own implementations.

Some things you can do with this data:
1. Build an object using the table name and create correctly typed declarations/properties for each field.  As mentioned earlier, you will have to map data types in SQL to .NET types (i.e., bits are Booleans and most character fields are strings, etc.); a sketch of such a mapping follows this list.

2. Using SYSCOLUMNS, you could enforce some basic validation (is null allowed, max length, etc.).

3. Identify primary keys in your classes, etc.

4. Create CRUD operations (select, insert, update, and delete).
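For the type mapping in item 1, a simple lookup table gets you most of the way there.  This is a minimal sketch covering only a handful of common SQL Server types; extend it to match your schema:

using System.Collections.Generic;

public static class SqlTypeMap
{
    // Partial mapping from SQL Server type names (SYSTYPES.name)
    // to .NET type names.
    private static readonly Dictionary<string, string> Map =
        new Dictionary<string, string>
        {
            { "bit",      "bool" },
            { "int",      "int" },
            { "bigint",   "long" },
            { "decimal",  "decimal" },
            { "money",    "decimal" },
            { "float",    "double" },
            { "datetime", "DateTime" },
            { "char",     "string" },
            { "varchar",  "string" },
            { "nvarchar", "string" },
            { "text",     "string" }
        };

    public static string ToDotNetType(string sqlTypeName)
    {
        string dotNetType;
        if (Map.TryGetValue(sqlTypeName.ToLower(), out dotNetType))
            return dotNetType;

        return "object"; // fallback for anything unmapped
    }
}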

Monday, May 3, 2010

Content Management Systems for .NET

I'm frustrated with the Content Management System (CMS) offerings for ASP.NET/SQL.

I tried to install three: DotNetNuke, AxCMS, and Umbraco.  All three failed the initial installation.

DotNetNuke would have been the easiest if it had worked.  It used the Web Platform Installer.  The annoying thing with the installer is that it requires a reboot, which means you have to do this after hours on a production web server.  I was able to get through to the screen where I set up my IIS information for the site and the SQL database screen.  On the SQL screen, I picked my existing SQL Enterprise DB instead of using SQL Express.  During the DB installation part, it failed due to some error with the DNN database script not having something declared properly.  This caused the entire installation to fail, and when I went to clean up, nothing had been written to the DB or updated in IIS; only the files remained in my predefined folder location.

AxCMS had a command-line batch install, and it failed on most of the steps.  Allegedly, there were configuration files you could tweak per the install prompts, but the directories it indicated didn't exist.

Umbraco I extracted to a folder and then created as a website.  It failed trying to register AJAX components, but when I checked the bin folder I found all the DLLs and versions that the web.config specified.

There needs to be a CMS solution that is easy to install and use!  I'm sure there is one out there, but many of the sites for these products are lacking: the feature lists are vague, there is no demo site, and there are very few screenshots to see what the product looks like.

Building a PC -- considerations

A technology-related but not programming-related item... building a PC.  I'm not going to talk through the steps, but I have some tips and tricks that should make building and troubleshooting your next build project faster and easier.

1. Start with the motherboard first.  To me, picking the motherboard is one of the most important decisions.  Many people think having a big fast CPU is the key to performance but the motherboard is equally a factor.  (As an aside... engine building is the same way.  People focus on cams, pistons, intakes, carbs, etc... but cylinder heads are a big determining factor of performance.)

Brand-wise, I like (in no order) the DFI LanParty series (if they still make that series), MSI, and Asus.  I currently run an ASRock board in my desktop, but I did that because it supported two different types of RAM so I didn't have to upgrade my RAM right away.  It's been a reliable product, but I don't know enough about the brand to endorse it.

Expect to spend $100-$200 on a motherboard.  I tend towards $200.

I like a site like NewEgg because it allows for filtering and has reviews to help you find a good product.  I'll take it a step further and use Bing or Google to find technical reviews on the several motherboards I'm considering.

Some boards support an older and newer memory type (like my ASRock).  You can save some money here on your build by using existing memory now and upgrading later if you're on a tight budget.

When building your machine, power up the system after the motherboard is installed.  If nothing happens, the board is probably bad and needs to be replaced.  You should get some type of POST screen and possibly a message that the processor is missing.

I've found that motherboards have a high DOA rate... in my experience in the 30-40% range.  So there is no advantage to installing everything before test firing the machine.

2. CPU.  CPUs also have a fairly high DOA rate, probably 20%.  After testing the MoBo, be sure to test with the CPU installed (make sure the fan is installed!  See below...).  You should get an error at this point that no drive/OS can be found, and all indications are your build will be successful.

Be sure to use thermal grease between the top of the CPU and the heat sink/fan combo.  It only takes seconds for the CPU to reach the 130-140 degree mark!  Not having enough (or having too much) will cause the CPU to overheat and fail.  Most BIOS systems will have an area to view the CPU temp.

3. Drive performance.  Always get at least a 7200 RPM drive... the 5400 RPM drives are too slow and are typically used in laptops to save power.

4. If you want to play games, watch movies, etc., you'll need a graphics card, as the integrated card probably won't cut it.  You should be able to find something workable in the $100 range, and you'll want it to have at least 1GB of its own memory onboard.

5. Intel vs AMD?  If cost is an issue, go with AMD.  I jumped on the AMD bandwagon back in the mid-2000s, but I had several problematic builds.  On one machine, the chip blew up almost immediately after powering on.  On another build, the machine would randomly shut down.  I also noticed strange errors periodically when closing a program.  Things may have changed, but it seems like the Intel platform is a little more stable.

Sunday, May 2, 2010

HTC Droid - 5 Months Later

I've had my HTC Droid for about 5 months now and really have a good understanding of the device, what I like, and what I don't like about it.

For $99, I would have to say this is one of the better "smart" phones on the market.

PROS
- Gmail integration is top notch.  My "People" (contacts) list syncs flawlessly with Gmail.  I prefer to edit contacts in Gmail on a PC, and changes are automatically propagated.

- Facebook integration.  I'm losing interest in Facebook, but the phone allows you to map Facebook profiles to contacts, which means I automatically get a profile picture when someone calls without having to do anything.  Plus, I get birthday information and can quickly see any updates people have posted.  This really helps me keep in touch with what is going on with my friends.

- Camera.  The camera takes great pictures (I think it's a 5 MP camera).  I rarely take my 10MP Canon Rebel anywhere unless I need/want really good photos.  The built-in album allows you to quickly and easily share pictures and video via email, Facebook, or YouTube.

- Apps.  The Market apps are comparable to many of the iPhone apps available.  Some I downloaded and haven't used (like the GPS breadcrumb app), but things like Shazam (identifies studio-recorded songs by listening through the microphone), IHeartRadio (Clear Channel stations), and Pandora (create a station of your favorite bands) are great.

- Phone.  The call clarity is very good on Verizon.  Phone functionality is easy to use, like a phone should be.

CONS
- Memory Management.  I don't like that apps run in the background, but running Advanced Task Killer takes care of that and saves battery life.  There might also be a memory leak, because after a few days the phone seems sluggish.

- Keyboard.  It's hard to type unless you turn the phone sideways, which requires 2 hands instead of 1.  Sometimes the phone is slow to switch orientation as well, which sucks.

- Updates.  I really expected to see a system update already.  Specifically, I wanted to get some of the Android 2.0 OS functionality.

Saturday, May 1, 2010

Steve Jobs on Adobe Flash

This post pertains to the open letter from Steve Jobs in late April regarding Flash operating on the iPod, iPhone, and iPad.

See the letter at http://www.apple.com/hotnews/thoughts-on-flash/

It never ceases to amaze me how full of himself and his products Steve Jobs is.  In the Mac vs. PC battle for market share, the Mac is in 2nd place in a two-man race... by a mile.  PC sales dwarf Mac sales.  People want options, and the PC platform offers them.  If you don't believe that... visit the Mac software shelf in a store; it's about 1/10th the size of the Windows section.  The PC hardware section in Fry's runs along almost one full wall and at least a dozen rows.  The Mac section... two rows.

Ironically, two of the best-selling products on the Mac are Adobe Photoshop and Microsoft Office.  Steve Jobs should be thankful that these vendors reach out to Mac users and provide them some of the great software packages that PC users enjoy.  Those two rows in the store are going to look pretty empty if they lose some of these popular titles.  And you also shouldn't blame a vendor as the #1 reason your system crashes.  Jobs basically called Adobe Flash junk, not only in terms of security but performance.

Apple doesn't invent anything... they repackage ideas in their own shiny, overpriced way and the Apple snobs eat it up.  But the fact is Apple products have many flaws (ex: if your battery dies on your iPad, iPhone, or iPod... you have to buy a refurbished one; no replacement batteries) and Apple shouldn't be throwing stones.