Repopulating the Newsfeed Cache after a Server Restart

In SharePoint 2013, the Newsfeed relies on data cached in the Distributed Cache service, which behind the scenes runs on the AppFabric caching service. Newsfeed data is lost when you restart a server in your farm running this service without performing a graceful shutdown (http://technet.microsoft.com/en-us/library/jj219613.aspx#graceful), which looks like this:

Stop-SPDistributedCacheServiceInstance -Graceful
Remove-SPDistributedCacheServiceInstance

Sometimes you have to restart all servers in the farm, and then your newsfeed will be empty.  There is a timer job called “Feed Cache Repopulation Job” that runs every 5 minutes and, according to http://technet.microsoft.com/en-us/library/jj219560.aspx, is supposed to automatically repopulate the newsfeed cache from the content stored in SharePoint.  Our SharePoint 2013 farm is on the March 2013 PU and this job did not seem to be repopulating the cache.
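If you want to look at the timer job itself, or kick it off manually rather than waiting for its schedule, something like the following should work from the SharePoint Management Shell. Treat it as a quick sketch; I'm filtering on the display name since the internal job name may differ.

# Find the feed cache repopulation timer job by display name, see when it last ran, and queue it now
$job = Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Feed Cache Repopulation*" }
$job | Select-Object DisplayName, LastRunTime
$job | Start-SPTimerJob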

The article seemed to imply you could run some PowerShell cmdlets as well to accomplish the same thing.  I tried these:

Update-SPRepopulateMicroblogLMTCache
Update-SPRepopulateMicroblogFeedCache

The parameter for the first one was easy: just pass in your UPA proxy.  The second one also needs this proxy, but it can additionally take an account name or a site URL (http://technet.microsoft.com/en-us/library/jj219749.aspx).  The wording states that when using the account name you should use the “user account name for the user profile service application”.  I took this to mean the UPA service account.  I tried that, and even after waiting several hours there still wasn’t any repopulation.  So I tried the site URL option, passing in the mysite host URL.  Still nothing.
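For reference, these are roughly the two variations I tried at this point (the service account and mysite host URL below are placeholders):

# $proxy is your UPA proxy, retrieved the same way as in the full script further down
# Attempt 1: UPA service account (placeholder) - did not repopulate anything
Update-SPRepopulateMicroblogFeedCache -ProfileServiceApplicationProxy $proxy -AccountName "contoso\sp_upa_service"
# Attempt 2: mysite host URL (placeholder) - also did not repopulate anything
Update-SPRepopulateMicroblogFeedCache -ProfileServiceApplicationProxy $proxy -SiteUrl "https://mysite.company.com"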

I finally figured out, after using Reflector on the source code, that the account name it was expecting was the account of a user whose own information should be repopulated.  I updated my script to the code below, which runs Update-SPRepopulateMicroblogFeedCache for EACH user in the UPA, and my newsfeed cache started coming back to life!

# Get the User Profile Service Application proxy (the name will vary per farm)
$proxy  = Get-SPServiceApplicationProxy | ? {$_.Name -eq "MySite User Profile Service"}

# Repopulate the Last Modified Time (LMT) cache
Update-SPRepopulateMicroblogLMTCache -ProfileServiceApplicationProxy $proxy

# Load the user profile object model
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server")
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server.UserProfiles")

# Connect to the mysite host and get a UserProfileManager
$url = "https://mysite.company.com"
$contextWeb = New-Object Microsoft.SharePoint.SPSite($url);
$ServerContext = [Microsoft.Office.Server.ServerContext]::GetContext($contextWeb);

$UserProfileManager = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($ServerContext);

$Profiles = $UserProfileManager.GetEnumerator();

# Repopulate the feed cache for each user that has a personal site
foreach ($oUser in $Profiles ) {
	if ($oUser.item("SPS-PersonalSiteCapabilities").Value -eq 14 ){
		$personalurl = $url + $oUser.item("personalspace").Value
		Write-Host $oUser.item("AccountName").Value
		Update-SPRepopulateMicroblogFeedCache -ProfileServiceApplicationProxy $proxy -accountname $oUser.item("AccountName").Value 
		#-siteurl $personalurl
	}
}

$contextWeb.Dispose()

The first time through I got a couple of errors, so I added the if statement to check for PersonalSiteCapabilities being equal to 14.  After that I got fewer errors, but there still were a few.  That’s when I tried going the site URL route: I was thinking that if I passed in the URL of a user’s personal site, it might work better.  I didn’t get any errors, but it also didn’t repopulate the newsfeed for the users.  Oh well…

I now believe the SiteUrl parameter is for repopulating the newsfeed cache of sites that have the newsfeed on their homepage, like the new SP 2013 team site template.  I know our environment doesn’t have any of these so I skipped this part, though at some point I will need to figure it out.  Hopefully it won’t involve looping through all sites in my farm, but my gut says it will (a rough sketch of that brute-force approach is below).  If someone else has figured out a good solution, please post the PowerShell code in the comments.  Thanks.
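For what it’s worth, this is the kind of loop I had in mind but have not actually run, so treat it as an untested sketch. The web application URL is a placeholder, and you would probably want to filter down to sites that actually have the Site Feed feature activated rather than hitting everything:

# Untested sketch: repopulate the site feed cache for every site collection in a web application
$proxy = Get-SPServiceApplicationProxy | ? {$_.Name -eq "MySite User Profile Service"}
Get-SPSite -WebApplication "https://teams.company.com" -Limit All | ForEach-Object {
	Write-Host "Repopulating site feed cache for" $_.Url
	Update-SPRepopulateMicroblogFeedCache -ProfileServiceApplicationProxy $proxy -SiteUrl $_.Url
	$_.Dispose()
}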

Update 3/29/2014:  

This issue has been resolved in Service Pack 1 (SP1).

Getting Active Directory UserId from Windows Claim in SharePoint 2013

We’ve always used NTLM for our SharePoint authentication, but in SharePoint 2013 claims-based authentication is the preferred method.  Fortunately, SharePoint 2013 ships with something called Windows claims.  This works the same as the NTLM auth from before, but the Windows identity is converted into a claim that SharePoint can use.

This change means that your userid would look something like this:

i:0#.w|contoso\chris

instead of this:

contoso\chris

Sometimes when calling other services you need the Windows userid and not the claims userid, so for these instances I’ve created a few helper methods.

//Regex needs more testing
public const string CLAIMS_REGEX = @"(?<IdentityClaim>[ic])?:?0(?<ClaimType>.)(?<ClaimValueType>.)(?<AuthMode>[wstmrfc])(\|(?<OriginalIssuer>[^\|]*))?(\|(?<ClaimValue>.*))";
 
public static string GetAdUserIdForClaim(string login)
{
    // Default to returning the input unchanged if it isn't a claims login
    string userName = login;

    foreach (Match m in Regex.Matches(login, CLAIMS_REGEX, RegexOptions.IgnoreCase))
    {
        try
        {
            // "w" = Windows auth mode; the claim value holds the domain\user login
            if (m.Groups["AuthMode"].Captures[0].Value.ToLower() == "w")
            {
                userName = m.Groups["ClaimValue"].Captures[0].Value;
            }
        }
        catch { } // ignore malformed matches and fall back to the original login
    }
    return userName;
}
 
public static string GetClaimForAdUserId(string login)
{
    string userName = login;
    SPClaimProviderManager mgr = SPClaimProviderManager.Local;
    if (mgr == null) return userName;
 
    SPClaim claim = new SPClaim(SPClaimTypes.UserLogonName, login, "http://www.w3.org/2001/XMLSchema#string", SPOriginalIssuers.Format(SPOriginalIssuerType.Windows));
    userName = mgr.EncodeClaim(claim);
 
    return userName;
}
 
public static bool IsLoginClaims(string login)
{
    Regex re = new Regex(CLAIMS_REGEX, RegexOptions.IgnoreCase);
    return re.IsMatch(login);
}

First I made a regular expression to identify the different pieces of a claim (see http://social.technet.microsoft.com/wiki/contents/articles/13921.sharepoint-2013-and-sharepoint-2010-claims-encoding.aspx).  This allows me to effectively parse the claim for the Windows login name (see GetAdUserIdForClaim).  It also allows me to validate whether a string is a claim or not (see IsLoginClaims).
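As a side note, if you happen to be in PowerShell rather than C#, I believe the same round trip can be done with the built-in SPClaimProviderManager (the same class GetClaimForAdUserId uses to encode), so consider this a hedged alternative rather than something the code above relies on:

# Rough PowerShell equivalent using the claims object model (run in the SharePoint Management Shell)
$mgr = [Microsoft.SharePoint.Administration.Claims.SPClaimProviderManager]::Local
$mgr.DecodeClaim("i:0#.w|contoso\chris").Value   # should return contoso\chris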

Update 01-22-2015:

After some more usage, I found that I was being too limiting in the Claim Types and Claim Value Types in my regex.  I had based the options on the TechNet article above, but I recently ran into some other Claim Types that were not in that article.  I then found this page:  http://blogs.msdn.com/b/scicoria/archive/2011/06/30/identity-claims-encoding-for-sharepoint.aspx, which lists a lot more than the TechNet article.  It also now seems that almost any value could show up there in the future.  Because of this I changed the regex in the code above to allow any value in those two fields.

Gotchas using Custom Web Parts and the Minimal Download Strategy

I’ve been playing around with some of my custom code in SharePoint 2013.  One of the issues I’ve been noticing is that when I add any of my custom web parts to a page, the Minimal Download Strategy (MDS) fails over to the normal ASP.NET WebForms page.  You can tell the difference by looking at the URL.  A URL that looks like this:

/_layouts/15/start.aspx#/SitePages/Home.aspx

is using MDS.  Notice the actual page is start.aspx, and the page it is AJAX-loading (via MDS) is /SitePages/Home.aspx (the part after the hash (#)).  A normal ASP.NET WebForms page URL would look like this:

/SitePages/Home.aspx

Both end up showing the same page, but with MDS you get the added benefit of less being downloaded on each click, along with smoother and snappier page changes.

Gotcha #1 – Decorate your assembly or class with MdsCompliant(true).  If MDS isn’t working and you see this message in your ULS logs:

MDSLog: MDSFailover: A control was discovered which does not comply with MDS

then you will need to add the attribute (http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.webcontrols.mdscompliantattribute.aspx).  Here is an example of adding it to the class:

[MdsCompliant(true)]
public class MyWebPart : WebPart
{
	//...Code
}

And here is an example of adding it to an assembly (AssemblyInfo.cs):

[assembly: MdsCompliant(true)]

Every control that is loaded on the page needs this attribute.  So if you are loading other controls in your web part, you’ll need to make sure they also have this attribute.

Gotcha #2 – Declarative user controls, typically used for delegate controls, need a code-behind with the attribute set as well.  I use a lot of delegate controls and then use features to swap out the user controls inside them to add or remove functionality on a site.  Typically my user controls didn’t have a code-behind and would just add web parts, HTML, or other controls in the markup.  The issue is that if you want these controls to be swappable while using MDS, you will need to add a code-behind to the user control and decorate it with the MdsCompliant attribute.

So a normal user control like this:

<%@ Control Language="C#" AutoEventWireup="true"  %>
<%@ Register TagPrefix="MyWebParts" Namespace="MyWebParts"%>

<MyWebParts:WebPart runat="server" Title="WebPart"></MyWebParts:WebPart>

would need to be converted to this:

<%@ Assembly Name="$SharePoint.Project.AssemblyFullName$" %>
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="MyControl.ascx.cs" Inherits="MyControl" %>
<%@ Register TagPrefix="MyWebParts" Namespace="MyWebParts"%>

<MyWebParts:WebPart runat="server" Title="WebPart"></MyWebParts:WebPart>

and with the following codebehind:

[MdsCompliant(true)]
public partial class MyControl : UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }
}

I couldn’t figure out a way to decorate the user control without using a code-behind.  If anyone else knows how to do this, please comment or contact me with the info.  Thanks!

Gotcha #3 – Inline scripts are not allowed and will need to be registered using SPPageContentManager.  If you receive any of the following messages in your ULS logs, you will need to look at your content:

MDSFailover: document.write
MDSFailover: document.writeln
MDSFailover: Unexpected tag in head
MDSFailover: script in markup

The first two are obvious; you can’t have any document.write or document.writeln calls in your HTML.  The third is a little less obvious.  According to an MS source, these tags are not allowed in the head:

  • Meta refresh tag
  • link tag for a stylesheet (text/css)
  • script tag
  • style tag
  • title tag
  • base tag

The fourth was the big one for me.  I had made the decision a few years ago to switch almost all of my web parts over to XSLT rendering, which means I had a lot of inline JavaScript and CSS in my XSLT.  Luckily, I had previously created a special web part that could AJAX-load other web parts using an UpdatePanel, and I had already solved the inline script issue there.  While writing that web part I found this page, http://www.codeproject.com/Articles/21962/AJAX-UpdatePanel-With-Inline-Client-Scripts-Parser, which shows how to extend the out-of-the-box UpdatePanel to automatically find inline scripts and register them so they work correctly with AJAX.  I was able to slightly modify this code to work for my XSLT-rendered web parts.

// True when the request is an MDS delta request (the AjaxDelta query string is present)
public bool IsAnAjaxDeltaRequest
{
	get
	{
		return false == String.IsNullOrEmpty(Context.Request.QueryString["AjaxDelta"]);
	}
}

protected override void Render(HtmlTextWriter output)
{
	base.Render(output);
	// GetHtml() is this web part's own helper that produces the XSLT-rendered markup
	string html = GetHtml();
	if (IsAnAjaxDeltaRequest)
	{
		// On MDS delta requests, strip inline scripts out of the markup and register them instead
		html = RegisterAndRemoveInlineClientScripts(this, this.GetType(), html);
	}
	output.Write(html);
}

// Matches <script> blocks and captures their attributes and body
public static readonly Regex REGEX_CLIENTSCRIPTS = new Regex(
	"<script\\s((?<aname>[-\\w]+)=[\"'](?<avalue>.*?)[\"']\\s?)*\\s*>(?<script>.*?)</script>",
	RegexOptions.Singleline | RegexOptions.IgnoreCase | RegexOptions.Compiled |
	RegexOptions.ExplicitCapture);

public static string RegisterAndRemoveInlineClientScripts(Control control, Type type, string htmlsource)
{
	if (htmlsource.IndexOf("<script", StringComparison.CurrentCultureIgnoreCase) > -1)
	{
		MatchCollection matches = REGEX_CLIENTSCRIPTS.Matches(htmlsource);
		if (matches.Count > 0)
		{
			for (int i = 0; i < matches.Count; i++)
			{
				string script = matches[i].Groups["script"].Value;
				string scriptID = script.GetHashCode().ToString();
				string scriptSrc = "";

				CaptureCollection aname = matches[i].Groups["aname"].Captures;
				CaptureCollection avalue = matches[i].Groups["avalue"].Captures;
				for (int u = 0; u < aname.Count; u++)
				{
					if (aname[u].Value.IndexOf("src",
						StringComparison.CurrentCultureIgnoreCase) == 0)
					{
						scriptSrc = avalue[u].Value;
						break;
					}
				}

				if (scriptSrc.Length > 0)
				{
					SPPageContentManager.RegisterClientScriptInclude(control,
						type, scriptID, scriptSrc);
				}
				else
				{
					SPPageContentManager.RegisterClientScriptBlock(control, type,
						scriptID, script);
				}

				htmlsource = htmlsource.Replace(matches[i].Value, "");
			}

		}
	}
	return htmlsource;
}

Since this code will automatically register any script references it finds, make sure the paths to your scripts are correct; otherwise MDS will silently fail over without any ULS messages.

Update 5/3/2013:

I have found another potential MDS error message in ULS:

MDSLog: Master page version mismatch occurred in MDS

or in the ajax response if using fiddler:

versionMismatch

I was able to resolve this by browsing to another site and then back to the original site with the issue.  It’s a weird one; if anyone knows more about this error, please contact me or comment below.

BlobCache issues with time difference between SharePoint WFE and SQL

We recently ran into an interesting issue: when a user uploaded an image into SharePoint and then tried to view that image, they would receive an error.  For the rest of the day they would continue to get the error when viewing that image, but the image would work fine for other users.  If the user cleared their browser cache, the image would start working for them.  Also, if the user waited a few minutes after uploading an image before viewing it, it would work as expected.  The error the end user saw was “An unexpected error has occurred”, but looking at the real error revealed the following:

Message: Specified argument was out of the range of valid values.
Parameter name: utcDate
Stack Trace:    at System.Web.HttpCachePolicy.UtcSetLastModified(DateTime utcDate)   at System.Web.HttpCachePolicy.SetLastModified(DateTime date)   at Microsoft.SharePoint.Publishing.BlobCache.<>c__DisplayClass42.<SendCachedFile>b__41()   at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass4.<RunWithElevatedPrivileges>b__2()  at  Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode)   at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback secureCode, Object param)   at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated secureCode)   at Microsoft.SharePoint.Publishing.BlobCache.SendCachedFile(HttpContext context, BlobCacheEntry target, SPUserToken currentUserToken, SiteEntry currentSiteEntry)   at Microsoft.SharePoint.Publishing.BlobCache.HandleCachedFile(HttpContext context, BlobCacheEntry target, Boolean anonymousUser, SiteEntry currentSiteEntry)   at Microsoft.SharePoint.Publishing.BlobCache.RewriteUrl(Object sender, EventArgs e, Boolean preAuthenticate)   at Microsoft.SharePoint.Publishing.PublishingHttpModule.AuthorizeRequestHandler(Object sender, EventArgs ea)   at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

Looking at the error told me a few things: one, this seemed to be time related, and two, it seemed to only affect files stored in the blob cache.  That also explained why clearing the browser cache “fixed” the issue for a user getting the error and why other users did not receive it.  Because of the way SharePoint’s BlobCache optimizes things, it will only send the image to the browser if the image has changed, assuming the browser has a cached version.  Since the first time the user viewed the image generated an error, every time after that the browser was just displaying a cached version of the error.  This was also apparent because the correlation id was the same GUID each time they viewed the image.  In SharePoint, each request gets its own GUID as the correlation id, so they should never be the same between requests.

I spent some time reflectoring the code, and the best I could determine was that it was sending the last modified date of the image in the response header, and for some reason that date was in the future on the WFEs where the blob cache was running, hence the error.  At first I thought the client’s clock might be causing this when uploading an image through explorer view, but that didn’t seem to affect it.  To be honest, I was stumped for a little while.  Then that evening I was out for a run and it hit me: I bet the SQL box’s time was ahead of the WFEs’.

I tested this theory by viewing the time on the SQL box and the WFE, and since SQL was ahead by over a minute, I waited until they were at different minute values and uploaded a file.  The last modified time of the file showing in SharePoint was actually the time on the SQL server and not on the WFE, so at that instant the last modified time of the file was in the future from the WFE’s point of view.  It seems the stored procedure SharePoint calls to add a document calls getutcdate() and thus uses the SQL server’s time.  A quick way to check for this kind of skew is sketched below.
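Something like this, run from a WFE, should show the offset. It assumes the SQLPS/SqlServer PowerShell module is available, and the SQL server name is a placeholder:

# Compare this WFE's UTC time with the SQL server's GETUTCDATE() (server name is a placeholder)
$sqlTime = (Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query "SELECT GETUTCDATE() AS UtcNow").UtcNow
$wfeTime = [DateTime]::UtcNow
Write-Host "SQL is ahead of this WFE by" ($sqlTime - $wfeTime).TotalSeconds "seconds"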

I got our infrastructure guys involved and had them look into the time issues.  Once they got those resolved, our image issues went away.  Normally all computers in an Active Directory domain have the same time, but in this instance we are migrating to a new AD environment, and our SQL boxes were on one domain while the WFEs were on another.

Update 5/3/2013

After talking with our infrastructure team, it turns out the issue was caused by our domain controllers being virtual.  This is a problem because, by default, VMs get their time from the Hyper-V host instead of from the PDC emulator on the domain as they should.  Basically, they needed to uncheck the box that says the VM gets its time from the Hyper-V host.
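If you want to verify which source a machine is actually syncing from, w32tm will tell you; a domain-synced box should report a domain controller rather than the Hyper-V integration services time provider:

w32tm /query /source
w32tm /query /status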

Update 5/30/2013

It seems like this issue has gotten a lot better but it still pops up every once in a while.  My infrastructure team pointed me to this blog article:  http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/11/19/time-synchronization-in-hyper-v.aspx.  It states that “we put logic in our integration service that will not change the time if the virtual machine is more than 5 seconds ahead of the physical computer”.  Has anyone else come across this issue and resolved it completely?

Following Sites across Farms with SharePoint 2013 MySites

I’ve been struggling recently to get following sites working correctly with SharePoint 2013 mysites (see my forum post here).  If you don’t know what functionality I am referring to, see this blog post:  http://sharepoint.microsoft.com/blog/Pages/BlogPost.aspx?pID=1060.  I was attempting the scenario where you have two separate farms, each with its own user profile service application (UPA).  This could be an intranet where the mysites are located plus an extranet where you only use the UPA for AD synchronization, or it could be two farms located in different parts of the world, each with its own UPA and potentially its own mysite implementation.  For me, I needed to handle both situations simultaneously.

At first I was getting the following errors, either in the ULS logs of one of the servers or in the message headers I captured with a packet sniffer:

FollowedContent.FollowItem:Exception:System.Net.WebException: The remote server returned an error: (401) Unauthorized.
{"error_description":"Invalid JWT token. Could not resolve issuer token."}
x-ms-diagnostics: 3000006;reason="Token contains invalid signature.";category="invalid_client"
x-ms-diagnostics: 3002002; reason=App principal does not exist

Luckily, after a lot of time spent getting this to work, I believe I have a solution.  First things first: all farms that need to connect must use the same realm.  In my PowerShell scripts below I will use the realm myrealm.  I’ll also use the farm names mysitehost and collaboration, where mysitehost is the farm that hosts the mysites and collaboration is the external-facing farm.

First, follow the first few steps here:  http://technet.microsoft.com/en-us/library/ee704552.aspx on creating and copying the certificates.  Then run this script on the mysitehost farm (the publishing farm).

$trustCert = Get-PfxCertificate "C:\ConsumingFarmRoot.cer"
New-SPTrustedRootAuthority "COLLABORATION" -Certificate $trustCert

$stsCert = Get-PfxCertificate "c:\ConsumingFarmSTS.cer"
New-SPTrustedServiceTokenIssuer "COLLABORATION" -Certificate $stsCert

$farmid = "<guid>"; #Get the farm id from the collaboration farm by typing Get-SPFarm | Select Id into powershell
$security = Get-SPTopologyServiceApplication | Get-SPServiceApplicationSecurity 

$claimProvider = (Get-SPClaimProvider System).ClaimProvider 

$principal = New-SPClaimsPrincipal -ClaimType http://schemas.microsoft.com/sharepoint/2009/08/claims/farmid -ClaimProvider $claimProvider -ClaimValue $farmid 

Grant-SPObjectSecurity -Identity $security -Principal $principal -Rights "Full Control" 

Get-SPTopologyServiceApplication | Set-SPServiceApplicationSecurity -ObjectSecurity $security 

Set-SPAuthenticationRealm -realm "myrealm"
$sts=Get-SPSecurityTokenServiceConfig
$Realm=Get-SpAuthenticationRealm
$nameId = "00000003-0000-0ff1-ce00-000000000000@$Realm"
Write-Host "Setting STS NameId to $nameId"
$sts.NameIdentifier = $nameId

$c = Get-SPSecurityTokenServiceConfig
$c.AllowMetadataOverHttp = $true  #needed if you are not using ssl
$c.AllowOAuthOverHttp=$true #needed if you are not using ssl
$c.Update()

iisreset

Write-Host "Run Consumer Server Script and then press any key to continue ..."
$x = $host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")

New-SPTrustedSecurityTokenIssuer -MetadataEndpoint "http://collaboration.com/_layouts/15/metadata/json/1" -Name "collaboration metadata" -RegisteredIssuerName $nameId

Notice the section of the script that will pause and wait while you run the following script on the collaboration farm (consumer).

$trustCert = Get-PfxCertificate "C:\PublishingFarmRoot.cer"
New-SPTrustedRootAuthority "mysitehost" -Certificate $trustCert

Set-SPAuthenticationRealm -realm "myrealm"
$sts=Get-SPSecurityTokenServiceConfig
$Realm=Get-SpAuthenticationRealm
$nameId = "00000003-0000-0ff1-ce00-000000000000@$Realm"
Write-Host "Setting STS NameId to $nameId"
$sts.NameIdentifier = $nameId

$c = Get-SPSecurityTokenServiceConfig
$c.AllowMetadataOverHttp = $true  #needed if you are not using ssl
$c.AllowOAuthOverHttp=$true #needed if you are not using ssl
$c.Update()
Iisreset

New-SPTrustedSecurityTokenIssuer -MetadataEndpoint "http://mysitehost.com/_layouts/15/metadata/json/1" -Name "mysitehost metadata" -RegisteredIssuerName $nameId

The main difference between my scripts and the MS documentation is that I also include the RegisteredIssuerName in the New-SPTrustedSecurityTokenIssuer command (the last command in each script).  I was having issues with this value not being set correctly, and for some reason I was unable to change it after the issuer was created.  You can at least verify the value after creation, as shown below.
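A quick sanity check on either farm, which just lists what is already configured:

# Verify the trusted issuers and their RegisteredIssuerName values
Get-SPTrustedSecurityTokenIssuer | Select-Object Name, RegisteredIssuerName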

Now on your collaboration farm, you will need to set up a separate UPA and set the trusted mysite host locations (http://technet.microsoft.com/en-us/library/ee721061.aspx).

The way site following works is that when you request to follow a site on your farm, it looks at the farm’s UPA to find out the URL of your mysite (http://mysitehost/personal/domain_userid).  The server then basically forwards your request to your mysite’s web service, and since you are most likely using claims-based authentication (the best practice for SP 2013), your claim is sent over and the request is made using your credentials.  So if the personal site (PersonalSpace) user profile property is empty in your farm’s UPA, following won’t work; you’ll get a message saying your mysite is still being created.  To get it to work you need to put a valid site-relative path in the personal site user profile property for every user.  I first tried using the correct value (/personal/domain_userid) and of course that worked, but I also tried just putting a / in the field and that worked as well.

In the interest of correctness, though, I decided to implement the User Profile Replication Engine.  I couldn’t find the 2013 version even though the cmdlets are documented (http://technet.microsoft.com/en-us/library/ee906542.aspx), so I had to download and install the SharePoint 2010 Administration Toolkit (http://technet.microsoft.com/en-us/library/cc663011.aspx#section3).  You only need to install it on one server in one of the farms.  They suggest installing it on the source farm because less data is sent to the destination farm than is retrieved from the source farm.  Below is my PowerShell code to sync just the personal site (PersonalSpace) user profile property between my mysitehost farm and my collaboration farm.

Add-PsSnapin Microsoft.Office.Server.AdministrationToolkit.ReplicationEngine
#Full Sync
Start-SPProfileServiceFullReplication -Destination "http://collaboration.com" -Source "http://mysitehost.com" -Properties "PersonalSpace" -EnableInstrumentation 

#Incremental Sync
Start-SPProfileServiceIncrementalReplication -Destination "http://collaboration.com" -Source "http://mysitehost.com" -FeedProperties "PersonalSpace" -Properties "PersonalSpace" -ReplicationInterval 1 -EnableInstrumentation
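To spot-check that PersonalSpace actually made it over, you can reuse the same object model approach as the repopulation script earlier, this time against the consuming farm. The URL and account below are placeholders:

# Check the PersonalSpace property for one user on the collaboration farm (URL and account are placeholders)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server.UserProfiles") | Out-Null

$site = New-Object Microsoft.SharePoint.SPSite("http://collaboration.com")
$context = [Microsoft.Office.Server.ServerContext]::GetContext($site)
$upm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($context)
$up = $upm.GetUserProfile("contoso\chris")
Write-Host "PersonalSpace:" $up.item("PersonalSpace").Value
$site.Dispose()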

Now, if you are just setting up an extranet where you want internal employees to be able to follow sites and external users to have no mysite functionality beyond having their AD accounts synchronized, then you are done.  If you are connecting to another UPA which also has mysites, and you need to be able to share feeds and use the other social features, you will need to perform the same steps above as if it were the collaboration farm, plus a few additional steps: make sure you perform steps 2, 4 and 5 here:  http://technet.microsoft.com/en-us/library/ff621100.aspx.

There you have it.  It turns out the piece I was missing for so long was setting the realm and RegisteredIssuerName correctly on both farms so that the claim could be accepted and decrypted properly.  If you are planning on connecting UPAs from different farms where the farms do not reside in the same datacenter, please read this page on some known issues you might see:  http://technet.microsoft.com/en-us/library/cc262500.aspx#geodist