Monday, May 28, 2012

Windows Command Line Hash Check

Knowing that "Linux has been doing this for years," I decided to make a command-line method for checking the MD5 and SHA hashes of downloaded files. I used PowerShell to create a function and added it to my profile so that I can call it whenever I want. First, the code for the function:

function Check-Hash {
    param (
        [Parameter(Mandatory = $true, Position = 0)]
        [string]$file,
        [switch]$SHA1,
        [switch]$SHA256,
        [switch]$MD5
    )

    # Nested helper; reads $file and $algo from the enclosing scope
    function computeHash {
        $stream = New-Object System.IO.FileStream($file, [System.IO.FileMode]::Open)
        $stringBuilder = New-Object System.Text.StringBuilder
        $algo.ComputeHash($stream) | % { [void] $stringBuilder.Append($_.ToString("x2")) }
        $stream.Close()
        Write-Host $stringBuilder.ToString()
    }

    if ($SHA1) {
        $algo = New-Object System.Security.Cryptography.SHA1Managed
        Write-Host "SHA1: " -NoNewline ; computeHash
    }
    if ($SHA256) {
        $algo = New-Object System.Security.Cryptography.SHA256Managed
        Write-Host "SHA256: " -NoNewline ; computeHash
    }
    if ($MD5) {
        $algo = New-Object System.Security.Cryptography.MD5CryptoServiceProvider
        Write-Host "MD5: " -NoNewline ; computeHash
    }
}

You can save this as Check-Hash.ps1, which matches the function name. Next, dot-source the .ps1 file from your PowerShell profile so that the function is available by default whenever you start PowerShell. The path to your profile is stored in the $PROFILE variable.
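For example, assuming you saved the script to C:\Scripts\Check-Hash.ps1 (the path is an assumption; adjust it to wherever you saved the file), you could append the dot-source line to your profile like this:

```powershell
# Create the profile file if it doesn't exist yet
if (!(Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}

# Dot-source the script so Check-Hash is defined in every new session
# (C:\Scripts\Check-Hash.ps1 is an assumed location; adjust as needed)
Add-Content $PROFILE '. C:\Scripts\Check-Hash.ps1'
```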

Once added, you can check the hash of a file you downloaded using any combination of the algorithms. The syntax is "Check-Hash <filename> -SHA1 -SHA256 -MD5". Here is an example of a download and its matching hash:
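A call might look like the following (the file path is illustrative, not from the original post); the function prints one line per selected algorithm, e.g. "SHA1: " followed by forty hex digits:

```powershell
# Verify a downloaded ISO against the hashes published on the download page
# (the path below is an example)
Check-Hash C:\Downloads\ubuntu-12.04-desktop-i386.iso -SHA1 -MD5
```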

Tuesday, May 1, 2012

SharePoint and Reverse Proxies

Recently, I designed a SharePoint 2010 solution that involved an extranet scenario where the intranet portal would be accessed both from inside and outside of the LAN. The portal had to meet the following requirements:

  1. Be securely accessible from outside of the corporate network.
  2. Users should only type one URL and not have to specify port or protocol.
  3. When inside the corporate network, users should not be using SSL (HTTPS).
  4. User experience on the extranet should match the intranet.
I chose Forefront UAG 2010 SP1 to meet the extranet needs. It provides the required security and also acts as a reverse proxy, which spares users from having to specify different URLs, ports, or protocols.

The Setup

From a high level, the setup put the UAG server on the edge network and split DNS was used so that the URL would route straight to the SharePoint server from inside the network and to the UAG server when outside of the network. 

With this setup, SharePoint pages were served up through UAG and if a user didn't specify HTTPS, the UAG server would redirect all the links to SSL... or so we thought.

The Issue

During testing, we noticed that on the My Sites page, users' pictures were not displaying in the out-of-the-box Silverlight Organizational Chart web part; they were instead replaced with a "green shirt" icon different from the one displayed when a user has no picture uploaded. This only occurred when accessing the site through UAG, and only in the org chart web part. Profile pictures showed up fine in other places.

I opened up Fiddler to check out what was going on and discovered that the link for the picture in the Silverlight app was coming through as HTTP inside the JSON payload, and was not being converted to HTTPS by UAG.

The Explanation

UAG does not inspect the bodies of the packets crossing it for URLs to rewrite to SSL; the rewriting only happens in the headers. The image URL for this web part was inside the JSON payload, which UAG does not alter. Pictures showed up fine in other places because they are loaded into the page with GET requests, which UAG does alter. I needed to find a way to force SharePoint to send SSL URLs to users coming through UAG, but not to users on the corporate network. I also had to make sure that users didn't have to type a different URL, port, or protocol.

The issue lay with both SharePoint and the UAG server. Since UAG was acting as a reverse proxy, this is what happened:

  1. User types, "my.vmlab.loc" into the browser.
  2. UAG redirects to "https://my.vmlab.loc" and forces user to log in.
  3. Once logged in, the UAG server initiates a new connection to the SharePoint farm on behalf of the user with non-SSL. That is, "http://my.vmlab.loc."
  4. SharePoint sees this request coming in over port 80 with a host header of, "my.vmlab.loc" and directs it to the IIS site hosting My Sites.
  5. SharePoint sends the requests back to the UAG server as, "http://my.vmlab.loc", which then sends it back to the user as "https://my.vmlab.loc."
The kink in this process is that SharePoint always believes all requests are coming from "http://my.vmlab.loc". That is correct in the sense that neither internal network users nor UAG expect to communicate with SharePoint over SSL. The problem is that URLs generated inside scripts, which UAG does not rewrite, get sent back to the client with the wrong protocol.

The Fix

The trick to keeping all the portal's requirements while serving up the pages correctly lies with both UAG and SharePoint. What is needed is a way to tell SharePoint that UAG requests are "different" and need special treatment. On the SharePoint end, this can be accomplished using Alternate Access Mappings (AAM). An AAM tells SharePoint that when it receives a request from a certain URL, in this case the one UAG sends, it should respond with a different URL. When SharePoint generates URLs inside of scripts, it will then send out something different from the originating URL it received. I needed to rework the process above to achieve the following:

  1. User types, "my.vmlab.loc" into the browser.
  2. UAG redirects to "https://my.vmlab.loc" and forces user to log in.
  3. Once logged in, the UAG server initiates a new connection to the SharePoint farm on behalf of the user with non-SSL. This time, UAG will send down a "dummy" host header so that SharePoint knows that the request is coming from UAG. In this example, I went with, ""
  4. SharePoint sees this request coming in over port 80 with a host header of, "" and directs it to the IIS site hosting My Sites because of the AAM.
  5. SharePoint sends the requests back to the UAG server as, "", which then sends it back to the user as "https://my.vmlab.loc."
The key point here is that with the AAM in place, when SharePoint gets a request with the host header of, "" it will generate scripts using the public URL we specify. 
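The extension and mapping described above can also be scripted with the SharePoint 2010 cmdlets. This is only a sketch: "sp-uag.internal" is a hypothetical placeholder for the dummy host header (the actual value used in the original setup is not shown here), and the web application name and zone are assumptions; verify the resulting mappings in Central Administration afterward.

```powershell
# Extend the My Sites web application into the Internet zone. The -Url value
# is the public URL SharePoint will write into generated links; the host
# header is the dummy value UAG will send.
# "sp-uag.internal" is a hypothetical placeholder for the dummy host header.
Get-SPWebApplication "http://my.vmlab.loc" |
    New-SPWebApplicationExtension -Name "My Sites - UAG" `
        -Url "https://my.vmlab.loc" `
        -HostHeader "sp-uag.internal" -Port 80 -Zone "Internet"

# Add the dummy host header as an internal URL for that zone, so requests
# arriving with it are mapped back to the public https URL
New-SPAlternateURL -WebApplication "https://my.vmlab.loc" `
    -Url "http://sp-uag.internal" -Zone "Internet" -Internal
```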

Here is a picture of how the AAM was setup initially:

To change this, we need to extend the My Sites web application with a zone that uses the dummy host header. An example of what I chose is below:

You can double check IIS to see the result of your change:

From here, you need to change the AAM so they match something similar to this:

Note that SharePoint now knows to send out, "https://my.vmlab.loc" when it gets requests with, "" as its host header. This setup will not break functionality from within the network, as the AAM for "http://my.vmlab.loc" still maps to non-ssl and does no altering.

Next, we have to head over to the UAG server and change it so that it alters the host header on the way in and out for SharePoint. Here is a picture of how it was setup before the changes:

And here is the spot you'll need to change to make sure this magic happens:

Once these changes are applied, the pictures in the web part will load correctly:

Bonus: Confirmation

As a bonus, you can see what's going on underneath. Here is a picture of the JSON packet from Fiddler, which now displays the correct URL:

You can also see that with a regular picture GET, the machine is using, "my.vmlab.loc" as the host header and getting back SSL URLs:

Lastly, I took a capture with WireShark on the UAG server to show how it sees things. As you can see in the picture below it is changing the host header to the dummy one:

Saturday, March 31, 2012

Cloning Your SharePoint 2010 Farm

More than once, I've had a client who attempted to clone their virtual production SharePoint 2010 farm for development testing and ran into errors. The most common steps they take once they clone the systems are to assign the network interfaces new IPs and rename the host names. This is not enough, as both SQL Server and SharePoint keep references to the old names in their configurations.

I don't recommend this method for a couple of reasons, and suggest instead that you build a dedicated development farm that matches the configuration of your production environment. One issue is that the cloned machines retain their system SIDs, and it's recommended that a cloned VM be generalized with Sysprep to address this. Also, if your farm has jobs or processes that affect other systems, for example your Profile Service writing changes to Active Directory, those jobs will attempt to run.

With that said, here are the steps to take if you'd like to still go this route:

On All Servers

  1. Clone your farm member servers using your hypervisor's built-in methods.
  2. Make sure the virtual adapters on the cloned machines are not connected. This will prevent network IP and NETBIOS name conflicts when they power on.
  3. Assign the machines new IP addresses but still don't connect the virtual interfaces.
  4. Rename the host name on each farm member and reboot. This can be done in the System Properties:
Once the systems come back up and are renamed, you may connect the virtual interfaces.
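If you'd rather script the rename than click through System Properties, WMI can do it even on PowerShell v2, where the Rename-Computer cmdlet is not yet available. This is a sketch; "DEV-SPWFE01" is an example name, and the session needs local administrator rights:

```powershell
# Rename the local machine via WMI; a reboot is required afterward.
# For domain-joined machines, the Rename() method also accepts
# domain credentials: Rename(Name, Password, UserName).
$cs = Get-WmiObject Win32_ComputerSystem
$result = $cs.Rename("DEV-SPWFE01")

# ReturnValue 0 means the rename succeeded
$result.ReturnValue
```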

On the SQL Server

  1. On your SQL Server instance, you'll have to change the server name setting. This can be done in Management Studio (SSMS) with the following query:

    DECLARE @DropOld NVARCHAR(150)
    DECLARE @AddNew NVARCHAR(150)
    SET @DropOld = N'sp_dropserver ''' + @@SERVERNAME + N''''
    SET @AddNew = N'sp_addserver ''' + HOST_NAME() + N''', ''local'''
    EXEC (@DropOld)
    EXEC (@AddNew)
  2. Restart the SQL Server service to allow the changes to apply.
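After the restart, you can confirm the rename took effect by querying the server name again. One way, assuming the SQL Server PowerShell snap-in is installed on the box, is Invoke-Sqlcmd; @@SERVERNAME should now match the machine's new host name:

```powershell
# Compare the instance's registered name with the OS host name;
# after the rename and restart, the two should agree
Invoke-Sqlcmd -ServerInstance "localhost" `
    -Query "SELECT @@SERVERNAME AS ServerName, HOST_NAME() AS HostName"
```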

On the SharePoint Servers

  1. Run the following PowerShell command:

    Rename-SPServer [-Identity] <OriginalServerName> -Name <NewServerName>

    Rename-SPServer -Identity "SPWFE01" -Name "DEV-SPWFE01"
  2. List all of the URLs in your farm with the following PowerShell Command:

    Get-SPWebApplication -IncludeCentralAdministration
  3. If any of the URLs contain the old server name, rename them to the new server name with PowerShell. This must be done for each zone that needs to be changed:

    New-SPAlternateURL -Url http://newserver -WebApplication http://oldserver -Zone "Default"

  4. Add any domain accounts, if needed, to the web apps and Central Admin.

Recreate the User Profile Service Application

You probably will have to recreate your UPS application as it will have references to your old server names. 

  1. Go to Central Admin > Manage Service Applications and highlight User Profile Service Application. Then click Delete in the Ribbon.
  2. Do NOT click the checkbox in the dialog box that pops up. Leaving this unchecked keeps the UPS data.
  3. Open SQL Server Management Studio (SSMS) and find the three UPS databases: the Profile, Sync, and Social databases (your full names will vary).
  4. Delete the SyncDB, as it is only used by SharePoint as a staging area and does not contain useful data.
  5. Create a new User Profile Service Application and provide the exact names of the Profile and Social databases as they exist in SSMS.
  6. Once the UPS application is created, add your content search account as an administrator with the permission “Retrieve People Data for Search Crawlers”.
  7. Restart the UPS Synchronization service.
  8. In Central Admin go to Manage Services On Server.
  9. Start the User Profile Synchronization Service.
  10. Supply all needed information in the prompts.
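Once the prompts are complete, you can confirm from the SharePoint 2010 Management Shell that the synchronization service instance actually reached the Online state (this is a read-only check, not a replacement for the UI steps above):

```powershell
# Requires the SharePoint snap-in if run from a plain PowerShell window
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Find the User Profile Synchronization Service and report its status;
# it should show Online once provisioning finishes
Get-SPServiceInstance |
    Where-Object { $_.TypeName -like "User Profile Synchronization*" } |
    Select-Object TypeName, Server, Status
```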