Wednesday, June 28, 2017

Email Approval Considerations

I recently wrote a service for handling lazy approvals, i.e., someone gets an email that they can reply to with an "approved" or "rejected" response, and the system uses that to continue its workflow/process.

I can't share the code for it, but I wanted to give an overview of some of the challenges/considerations I ran into in case anyone is looking to write their own.

1.  Validation:  I originally added a custom MessageId to the outgoing emails and logged it along with the recipient address so I could validate that an email response was legitimate.  However, replying to a message generates a new MessageId, and the original wasn't available to me in the third-party IMAP library I used.

2. Parsing:  Determining which request an email approval is in reference to involves parsing.  This can get tricky if the email has been forwarded, replied to several times, or had its subject altered.  In one test case, a peer replied and their web-based email client put some unexpected formatting into the message.  A good approach is TDD (test-driven development): build up a set of tests to cover these scenarios as they arise.

3. Responses: To allow for some flexibility I had to create a dictionary of words/phrases that tied back to the response I was expecting, i.e., a reply of "yes" or "affirmative" would be treated as "approved" (see the sketch after this list).

4. Approval process workflow:  What happens when you reply approved but quickly change your mind and reply rejected?  Does the last response win, or the first?  If a response is invalid, do you allow users to keep replying until a valid response is given?
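
For point 3, here is a minimal sketch of the kind of keyword lookup I mean.  The ApprovalResult enum and NormalizeResponse method are hypothetical placeholders for illustration, not the actual code from the service:

public enum ApprovalResult { Approved, Rejected, Unknown }

// Hypothetical keyword map; requires using System, System.Collections.Generic, and System.Linq.
private static readonly Dictionary<string, ApprovalResult> ResponseMap =
    new Dictionary<string, ApprovalResult>(StringComparer.OrdinalIgnoreCase)
    {
        { "approved", ApprovalResult.Approved },
        { "approve", ApprovalResult.Approved },
        { "yes", ApprovalResult.Approved },
        { "affirmative", ApprovalResult.Approved },
        { "rejected", ApprovalResult.Rejected },
        { "reject", ApprovalResult.Rejected },
        { "no", ApprovalResult.Rejected }
    };

public ApprovalResult NormalizeResponse(string replyBody)
{
    // Only look at the first non-empty line so quoted text further down doesn't interfere.
    var firstLine = (replyBody ?? string.Empty)
        .Split('\n')
        .Select(l => l.Trim())
        .FirstOrDefault(l => l.Length > 0) ?? string.Empty;

    ApprovalResult result;
    return ResponseMap.TryGetValue(firstLine.TrimEnd('.', '!'), out result)
        ? result
        : ApprovalResult.Unknown;
}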

Monday, September 19, 2016

Deep Object Comparison

As I mentioned in my last post, you can't compare reference types using the == operator because it compares the memory addresses, not the values. So even when every value is the same, two separate objects will never equal each other.

Below is some sample code that actually does a deep comparison of two objects to figure out whether they are the same or not.

By deep I mean it will recursively follow all enumerations down to their lowest levels and compare them as well.

You might notice that there is no check that the objects are the same type.  If you are not familiar with T, it is a generic type parameter and forces the compiler to ensure both objects passed into this method are of the same type.

You'll also notice that values are compared as strings, but for my purposes, and at least to get you started, this is sufficient.

If there were data you wanted to return, you could also modify the return type to be a response object with a bool indicating whether the objects are the same and a list of which fields differed, per your needs.

// Requires: using System.Collections; using System.Linq;
public bool ObjectsAreSame<T>(T a, T b)
{
    var properties = a.GetType().GetProperties();

    foreach (var p in properties)
    {
        var valueA = p.GetValue(a, null);
        var valueB = p.GetValue(b, null);

        // Both null counts as equal; one null and one non-null is a difference.
        if (valueA == null && valueB == null) continue;
        if (valueA == null || valueB == null) return false;

        // Strings implement IEnumerable, so treat them as simple values below.
        if (typeof(IEnumerable).IsAssignableFrom(p.PropertyType) && p.PropertyType != typeof(string))
        {
            var listA = ((IEnumerable)valueA).Cast<object>().ToList();
            var listB = ((IEnumerable)valueB).Cast<object>().ToList();

            if (listA.Count != listB.Count) return false;

            // Recurse into each pair of elements; simple elements fall back to string comparison.
            for (int i = 0; i < listA.Count; i++)
            {
                if (listA[i] == null && listB[i] == null) continue;
                if (listA[i] == null || listB[i] == null) return false;

                if (listA[i] is string || listA[i].GetType().IsValueType)
                {
                    if (listA[i].ToString() != listB[i].ToString()) return false;
                }
                else if (!ObjectsAreSame(listA[i], listB[i]))
                {
                    return false;
                }
            }
        }
        else
        {
            // Values are compared as strings, which is sufficient for this purpose.
            if (valueA.ToString() != valueB.ToString()) return false;
        }
    }

    return true;
}
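
For completeness, usage is just a direct call.  Borrowing the Person class from the post below as an example:

var a = new Person() { FirstName = "John", LastName = "Doe" };
var b = new Person() { FirstName = "John", LastName = "Doe" };

bool same = ObjectsAreSame(a, b);   // true, because every property value matches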

Thursday, September 15, 2016

C# Value vs Reference Types

A refresher for those who may have forgotten about the wild world of value versus reference types.

Assume you have an object as follows:

public class Person
{
    public string FirstName {get;set;}
    public string LastName {get;set;}
}

Now assume the following code is executed:

Person a = new Person() { FirstName = "John", LastName = "Doe"};
Person b = new Person() { FirstName = "John", LastName = "Doe"};

Assert.AreEqual(a, b);

This assert will fail.  At first glance you might think "but the values are the same".  Yes, they are.  But that is not what is being evaluated!  Because a class is a reference type, what is being asserted is that the memory location of a is the same as that of b.  In this case they are not, because a and b are two separate objects.

There are several ways to address this.  One way would be to override the Equals method on the Person class and do a field-by-field comparison.  Another way might be to override the ToString method so that each object produces a string that could be compared.
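
As a quick sketch of the first option, an Equals override on Person might look something like this (GetHashCode is overridden too, since the two should stay in sync):

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as Person;
        if (other == null) return false;

        // Field-by-field comparison.
        return FirstName == other.FirstName && LastName == other.LastName;
    }

    public override int GetHashCode()
    {
        return (FirstName ?? string.Empty).GetHashCode() ^ (LastName ?? string.Empty).GetHashCode();
    }
}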

The way I prefer to do this is to create a method that uses reflection (plus generics and recursion) to thoroughly compare all of the object's fields.  This way there is no refactoring if fields are added/removed/changed.

Value types, however, will pass this assert (as will strings, which, despite being reference types, compare by value).  For example:

string a = "Hello!";
string b = "Hello!";

Assert.AreEqual(a, b);

In this case, we're not checking the reference but actually comparing the values.

Another interesting thing here that trips people up: if you pass a reference type into a method, then all changes made in the method will be reflected in the original object; not so with a value type.

Assume something like this:

public void ChangeName(Person aPerson)
{
    aPerson.FirstName = "Billy";
}

public void DoSomething()
{
    Person a = new Person() { FirstName = "John", LastName = "Doe"};

    ChangeName(a);

    Assert.AreEqual(a.FirstName, "Billy");
}

This will work because ChangeName received the memory reference to a and updated the name through it.  For a value type, this will not work; see the sketch below.
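
To illustrate the value-type side, here is a quick sketch using a hypothetical Point struct; the change made inside the method is lost because the struct is copied when it is passed in:

public struct Point
{
    public int X;
    public int Y;
}

public void MovePoint(Point p)
{
    p.X = 100;   // modifies the local copy, not the caller's value
}

public void DoSomethingElse()
{
    Point a = new Point() { X = 1, Y = 2 };

    MovePoint(a);

    Assert.AreEqual(1, a.X);   // passes; the change inside MovePoint did not stick
}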

Tuesday, August 23, 2016

Entity Framework - Foreign Key Fields

Something to keep in mind when working with objects that require a foreign key value: Entity Framework can handle this automatically.

If you have an object that requires a foreign key value you might have found yourself calling SaveChanges to save one object to the database to get its ID so you could insert it into another object.

The better way is to use the ForeignKey attribute on the corresponding property.  The attribute takes the name of the related property, e.g. [ForeignKey("NavigationPropertyNameHere")] on the key column.  You'll also want to add a public navigation property for the foreign table as well.

Now you can call SaveChanges once at the very end and the foreign key will get set across all the necessary objects.
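
As a rough sketch (the Customer/Order model and MyDbContext below are hypothetical, just for illustration), the mapping and the single SaveChanges call might look like this:

// Requires EF6: using System.Data.Entity; using System.ComponentModel.DataAnnotations.Schema;
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Order
{
    public int Id { get; set; }

    // Foreign key column, tied to the navigation property below.
    public int CustomerId { get; set; }

    [ForeignKey("CustomerId")]
    public Customer Customer { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}

public void CreateOrder(MyDbContext context)
{
    // Build the object graph and save once at the end.
    var customer = new Customer { Name = "Acme" };
    var order = new Order { Customer = customer };

    context.Orders.Add(order);
    context.SaveChanges();   // EF inserts the Customer first and fills in Order.CustomerId
}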


Wednesday, February 18, 2015

OleDB Alternative

Had an interesting issue come up using OleDB worth noting...

I was working on a project to import data from a tab-delimited file into SQL Server using SqlBulkCopy.  The number of rows OleDb read from the file matched the number of records inserted, but comparing the output to the old application with the same data produced different results.

I dropped the file into Excel and it also showed the same results as OleDb.  However, when opened in Notepad++ it became apparent that OleDb and Excel (presumably using OleDb to import the file) were merging some of the row data into a description column in the file.

So from all appearances it looked like the process was working but in reality it wasn't.

The workaround was to replace OleDb with TextFieldParser.  You can find plenty of examples of how this works by searching for it.  You will need to add a reference to Microsoft.VisualBasic (the TextFieldParser lives in the Microsoft.VisualBasic.FileIO namespace), which is odd but that's where they put it.

Once I implemented this change to read the file into a DataTable, the results matched the old application.
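
For reference, a minimal sketch of reading a tab-delimited file into a DataTable with TextFieldParser looks roughly like this (the method name and the assumption that the first row contains headers are mine):

// Requires a reference to Microsoft.VisualBasic
// using Microsoft.VisualBasic.FileIO;
// using System.Data;
public DataTable ReadTabDelimitedFile(string path)
{
    var table = new DataTable();

    using (var parser = new TextFieldParser(path))
    {
        parser.TextFieldType = FieldType.Delimited;
        parser.SetDelimiters("\t");
        parser.HasFieldsEnclosedInQuotes = false;

        // Assume the first row contains the column headers.
        foreach (var header in parser.ReadFields())
        {
            table.Columns.Add(header);
        }

        while (!parser.EndOfData)
        {
            table.Rows.Add(parser.ReadFields());
        }
    }

    return table;
}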

Friday, July 11, 2014

SharePoint Online Tidbits

I've been working on a project of late that uses SharePoint Online 2013 as part of the solution.  It was not a platform I chose but one that already existed and that I was asked to continue building on.  For those who might be evaluating a cloud SharePoint solution, or those who are just starting to work in it, these are my tips, tricks, and tidbits.  Please keep in mind this post is specifically about SharePoint Online 2013; on-premises SharePoint 2013 has differences, and some of the limitations described here can be overcome there.

1. Be aware of the list view threshold limit.  You can't change this online (you can on-premises).  If you have any type of data request that has to deal with more than 5,000 records at once, it won't work.

- If you have 5,001 items or more in your "All Items" view, it won't work.
- If you need to search that field by a single key with more than 5,000 results, it won't work.
- Same deal with an API call.

Having said that... Yes... you can have lists with over 5,000 items and still work with them.  The big key to this is multiple indexed fields.  You can build views that use those multiple indexes and CAML queries via the API as well.  A client has 1.5 million rows in several SP Online lists that they're querying in this manner.  Do I recommend it?  No.  But it is possible.

2. Service durability.  I'm not sure how they have the Office 365 cloud set up, but our instance constantly has problems.  Some days I can't log in to the Office 365 admin site.  Some days users can't save pages with content web parts.  Some days something is in extended recovery and other things are slow or quirky.  It has caused issues, especially at inopportune times.

3. DEV/QA/Prod environments.  Out of the box this concept really doesn't exist.  You can't really bundle up a site and deploy it to another site.  There are really two ways to try to do this.  One is saving a list or page as a template (with or without content); it works OK in the same site or in a subsite of that site.  The other is exporting large amounts of data to Excel and importing it into other sites, but that doesn't preserve the metadata.  By that I mean a column that is a drop-down list in Site A will become a text box in Site B.  Not to mention internal field name issues (see below).  You can export an entire site as a template too, but it has a 50 MB limit.

Keep in mind that SharePoint isn't really a traditional IT application.  By that I mean you don't have deliverables (EXEs, DLLs, SQL scripts, etc.) that you can deploy to various environments.  It is really a publishing model.  You check out a page, you save the changes, and when you're ready for it to go "live" you check it in.

When you want a QA team to test your solution/changes end-to-end, you have a lot of work ahead of you to build a suitable QA site or even a DEV site.  I've done it, but it is tedious.

4. Scripting is your friend.  Angular, KnockOut, JQuery, or plain old JavaScript.  Drop them into a Script Editor Web Part and fire away.  We used BxSlider, SimpleModal, KnockOut, the SharePoint JS, etc with no issues.  We're also looking at using Bootstrap for responsive UI.

SharePoint Online does NOT want you running resource-intensive things on their cloud solution, so the more things you push to scripting the happier everyone will be with this solution.  See items 9 and 10 for SharePoint/off-site app tidbits.

5. Internal field names can be an issue.  Say I have Site A and I created a list called MyList, and in my list I have firstnnname and Last Name.  I realize after creating the list that I misspelled first name and didn't follow the same convention (spacing and case), so I rename "firstnnname" to "First Name".  Everything will seem OK until you use an API.

When you use the API, there are two things to be aware of...

First, you'll notice in an API call that there is no "First Name" or "Last Name" field!  First Name will appear in the XML as firstnnname.  "Last Name" will appear as Last_x0020_Name.

SharePoint will use whatever the original name of the field was.  So oftentimes a list will have a "Title" field which will always be Title in the API, even if you rename it.  And spaces will be replaced by "_x0020_" because that is how SP handles spaces.

Oh hey... remember #3 when I talked about Excel exports?  That's the second thing.  If you exported this hypothetical MyList via Excel, there is no metadata.  So when you call the API against the imported list it WON'T be firstnnname!  It will be First_x0020_Name, because that is what SharePoint puts into the column header, which the import uses to create the internal column name.

6. The API is useful but it has limitations.  I used the SharePoint library and Knockout JS (KO) to query lists and bind them to KO templates.  It allowed me to do a basically MVC-esque solution inside of SharePoint.  However, the API can't get AD groups and such.  It can pull attributes but not groups.  My workaround was to create a Script Editor Web Part with a target (advanced -> target audience) of an AD group that set a variable in JavaScript that I could use to determine which user group had logged in.  We only had one group that had different content, but you could get very complex very quickly with this approach.

I'll also add that, luckily, only the pieces applicable to the user are rendered; so even if you have a bunch of stuff on a page, if you're using Target Audiences each intended audience will only get its own content rendered.

7. Targeting individual list items.  You can do this with a Content Query Web Part (CQWP).  If you go into a list and enable target audiences, it is smart enough to return only the items that the logged-in user has access to.  You CAN'T add a List Web Part that has target audiences set on list items and expect it to work.  It will show ALL list items.

I found the CQWP limiting because it offers you four fields (title, description, link URL, and image URL [names might be slightly different]) and basically looks like search results.  Apparently you can use CSR (Client-Side Rendering) to doll these up any way you want, but my solution didn't do that because I was doing back-end coding, we didn't have a UI/UX resource, and we didn't know once we got one whether he or she would be able to do this.

8. CSS.  Good and bad: SharePoint has branding/master page/master template type functionality.  It is great if you want a consistent look and feel, but my team found early on, when we were redoing parts of a site, that SharePoint was trumping our CSS.  The hack-and-slash solution is to specify a style attribute on page elements and style away.  The better approach is to make sure the CSS you are referencing has greater specificity than SharePoint's.  To that end, anything we wanted to style had a class name, and we styled the class, not the HTML elements themselves.

9. Custom SharePoint Apps are a mixed bag.  They have some limitations which you can see on the link provided below.  I also find them to be a little slow to load and respond.  It also stinks that an app will be unavailable while a new build is uploaded.  I can edit and save a page in SharePoint with the published version alive and well all day long.

http://msdn.microsoft.com/en-us/library/ff798382.aspx  

10. Cross-domain is an option.  You can set up a relying party, do stuff between SharePoint Online and on-premises systems, and IFrame in content.  A vendor ran into issues with the token lifetime, which required re-authenticating with ADFS inside the frame.

Sunday, March 30, 2014

IIS and .NET

A few things that came up during a deployment that I thought would be worth noting for anyone migrating to a new server...

1. Make sure that the full .NET Framework install is performed.  There is a "Client Profile" which doesn't include everything, and some things live in a different assembly than in the "full" framework.  You need to specifically target the Client Profile in your build if you intend to use it.

2. The 2.0 and 3.0 frameworks register MIME types and handlers in IIS that you'll need for ASP.NET and WCF.  You can run aspnet_regiis.exe from the 2.0 framework folder (and ServiceModelReg.exe from the 3.0 WCF folder) to register these manually.  Your 4.0 framework project will have issues if these registrations were not done.

-JY