BizTalk Server 2013 R2 Pipeline Component Wizard

While working on the upcoming installment in my BizTalk Server 2013 R2 New Features Series, I realized that I did not yet have a version of Martijn Hoogendoorn’s excellent BizTalk Server Pipeline Component Wizard that would work happily in Visual Studio 2013.

As a result, I filed this issue, and then made the necessary modifications to make it work in Visual Studio 2013 with BizTalk Server 2013 R2. It is now available for download here (updated 27-FEB-2015) if you want to give it a test run.

[Screenshot: the updated Pipeline Component Wizard running in Visual Studio 2013]

Unfortunately, that has consumed a significant portion of my evening, and thus the next installment in the blog series will be delayed. But fear not, there is some really cool stuff just around the corner.

Integrate 2014 Registration is Live

In other news, Integrate 2014 registration is now live, and is only 72 days away! It’s going to be an awesome start to a cool Redmond December as 300-400 BizTalk enthusiasts from around the globe converge on the Microsoft campus for a few days of announcements, learning, networking, and community.

I’m not going to lie, I’m pretty excited for this one! Especially seeing these session titles for the first day:

  • Understand the New Adapter Framework in BizTalk Services
  • Introducing the New Workflow Designer in BizTalk Services

Well, that’s all for now. I hope to see you there!

BizTalk Server 2013 Support for RESTful Services (Part 4/5)

Did you click a link from the newsletter expecting a post on ESB Toolkit 2.2? Head right over here. Otherwise, keep reading!

This post is the sixteenth in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013. It is also the fourth part of a five-part series on REST support in BizTalk Server 2013.

If you haven’t been following the REST series, stop reading right now, and then read from the beginning of the series. I won’t be re-covering any already covered ground in this post.

In this post we will discuss dealing with incoming JSON requests, such that they become readable/map-able XML for which we can generate a schema. Ultimately there are two ways that this can be accomplished, and both have actually already been extensively documented. The first method is using a custom pipeline component (demonstrated here by Nick Heppleston); the second is extending WCF with a custom behavior (briefly discussed here by Saravana Kumar).

Both of the articles linked in the above paragraph really only deal with the receive side of the story. In this article I will set out to demonstrate both receiving JSON data, and responding with JSON data. In the process I will demonstrate two methods of dealing with cross-domain requests (CORS and JSONP), and build pipeline components and pipelines to handle both scenarios.

Getting Started with a Custom Pipeline Component

In order to get started with custom pipeline component development, I will be reaching again for the BizTalk Server Pipeline Component Wizard (link points to the patch for BizTalk Server 2013, which requires InstallShield LE and an utterly maddening registration/download process). I am going to use the wizard to create two projects – one for a JsonDecoder pipeline component and one for a JsonEncoder pipeline component.

Each of these projects will be taking a dependency on Newtonsoft’s excellent Json.NET library for the actual JSON serialization/deserialization. This will ultimately be done through the official NuGet package for Json.NET.

Since we can have JSON data that starts as just a raw list of items, we likely want to wrap a named root item around that for the purpose of XML conversion. In order to choose the name for that root node, we will rely on the name of the operation that is being processed. However, in the case of a single operation with a lame name, we can expose a pipeline component property to allow overriding that behavior.

Another pipeline component property that would be nice to expose is a target namespace for the XML that will be generated.
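For illustration, here is a minimal sketch of how those two properties (used as this.RootNode and this.Namespace in the snippets below) might be exposed on the component, following the IPersistPropertyBag pattern the wizard generates. The ReadPropertyBag helper and the exact plumbing are assumptions for this sketch, not the wizard’s verbatim output:

public string RootNode { get; set; }
public string Namespace { get; set; }

public void Load(Microsoft.BizTalk.Component.Interop.IPropertyBag propertyBag, int errorLog)
{
    RootNode = ReadPropertyBag(propertyBag, "RootNode") as string;
    Namespace = ReadPropertyBag(propertyBag, "Namespace") as string;
}

public void Save(Microsoft.BizTalk.Component.Interop.IPropertyBag propertyBag, bool clearDirty, bool saveAllProperties)
{
    object val = RootNode;
    propertyBag.Write("RootNode", ref val);
    val = Namespace;
    propertyBag.Write("Namespace", ref val);
}

// Hypothetical helper: returns null if the property has not been set in the designer
private static object ReadPropertyBag(Microsoft.BizTalk.Component.Interop.IPropertyBag propertyBag, string name)
{
    object val = null;
    try { propertyBag.Read(name, out val, 0); }
    catch (System.ArgumentException) { }
    return val;
}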

Creating XML from JSON Data

Creating XML from JSON formatted data is actually supported out of the box by the Json.NET library. We can do so with the following code:

var jsonString = jsonStream.Length == 0
    ? string.Empty
    : Encoding.GetEncoding(inmsg.BodyPart.Charset ?? Encoding.UTF8.WebName)
              .GetString(jsonStream.ToArray());
           
// Name the root node of the generated XML doc after the operation, unless
// a specific name has been specified via the RootNode property.

var rawDoc = JsonConvert.DeserializeXmlNode(jsonString,
    string.IsNullOrWhiteSpace(this.RootNode)
        ? operationName
        : this.RootNode,
    true);

// Here we are ensuring that the custom namespace shows up on the root node
// so that we have a nice clean message type on the request messages

var xmlDoc = new XmlDocument();
xmlDoc.AppendChild(
    xmlDoc.CreateElement(DEFAULT_PREFIX, rawDoc.DocumentElement.LocalName, this.Namespace));
xmlDoc.DocumentElement.InnerXml = rawDoc.DocumentElement.InnerXml;

In the above snippet, you are seeing the operationName variable out of context. That variable will contain the value of the current operation requested (which is defined by you within the adapter properties for the WCF-WebHttp adapter, and then matched at runtime to the HTTP Method and URL requested).

Another weird thing that we are doing here is making sure that we have full control over the root node: we let the Json.NET library generate it, and then replace it with our own namespace-qualified root node.

Creating JSON from XML Data

Taking XML data and generating JSON formatted data is nearly as easy:

XmlDocument xmlDoc = new XmlDocument();
xmlDoc.Load(inmsg.BodyPart.Data);

if (xmlDoc.FirstChild.LocalName == "xml")
	xmlDoc.RemoveChild(xmlDoc.FirstChild);

// Remove any root-level attributes added in the process of creating the XML
// (Think xmlns attributes that have no meaning in JSON)

xmlDoc.DocumentElement.Attributes.RemoveAll();

string jsonString = JsonConvert.SerializeXmlNode(xmlDoc, Newtonsoft.Json.Formatting.Indented, true);

There are a few lines of code there dedicated to dealing with some quirks in the process. Namely, if you do not remove both the potentially prepended “xml” processing instruction node and the namespace attributes on the root node, they will show up in the JSON output – yikes!

Creating Schemas for JSON Requests and Responses

Assuming that the data will be in XML form whenever BizTalk gets a hold of it, how can we generate a representative schema without much pain? Our best bet will be simply taking some JSON, throwing it through the same process above, and then generating a schema from the resulting XML.

If you haven’t already, you will need to install the Well Formed XML schema generator by heading over to the C:\Program Files (x86)\Microsoft BizTalk Server 2013\SDK\Utilities\Schema Generator folder and then running the InstallWFX.vbs script that you will find there:

[Screenshot: running the InstallWFX.vbs script to install the Well Formed XML schema generator]

In order to make it easier to run some JSON formatted data through this process, I created a quick no-frills sample app that can do the conversion and save the result as an XML file:

[Screenshot: the no-frills JSON-to-XML conversion sample app]
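The app itself isn’t the interesting part; its core is just the same handful of Json.NET calls shown earlier. A minimal console sketch (file paths, root node name, and namespace here are placeholders, not the actual sample’s values) might look like this:

using System.Xml;
using Newtonsoft.Json;

class JsonToXml
{
    static void Main()
    {
        // Placeholder input/output paths
        var json = System.IO.File.ReadAllText("sample.json");

        // Same conversion the JsonDecoder performs: wrap the JSON in a named root node
        XmlDocument rawDoc = JsonConvert.DeserializeXmlNode(json, "Root", true);

        // Re-create the root element so it is namespace-qualified, as in the pipeline component
        var xmlDoc = new XmlDocument();
        xmlDoc.AppendChild(xmlDoc.CreateElement("ns0", rawDoc.DocumentElement.LocalName,
            "http://schemas.example.org/sample"));
        xmlDoc.DocumentElement.InnerXml = rawDoc.DocumentElement.InnerXml;

        xmlDoc.Save("sample.xml");
    }
}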

Once we have the output of that application, we can run it through the schema wizard and end up with a schema that looks somewhat like the following (which will still require some polishing with regard to the selection of data types):

[Screenshot: the generated schema in the BizTalk schema editor]

Dealing with Cross-Site Requests

Modern browsers will be quite leery of allowing for cross-domain requests (i.e., requests originating from a site on one domain which target a service on a different domain). That is not to say, however, that every instance of this is inherently bad.

One of the things that I wanted to make sure of is that I could publish a service which would allow for requests originating from a different domain. There are two common approaches for allowing this to happen.

One of them is handled at the HTTP level through specialized headers that are passed through a series of requests to ensure that a service is willing to be called from a different domain. This method is known as CORS, or Cross-Origin Resource Sharing.

The other method is much more hacky and really ought to be reserved for those cases where the browser can’t cope with CORS. This other method is JSONP (JSON with Padding).

Examining CORS

Let’s start this whole discussion by taking a look at CORS first. Assume that we have the following code:

var request = $.ajax({
	url: $("#serverUrl").val(),
	type: 'POST',
	contentType: 'application/json',
	data: JSON.stringify(data),
	processData: false,
	dataType: 'json'
})

Now assume that the item on the page with the id serverUrl contains a URL for a completely external site. If that is the case, and you’re using a modern browser that supports CORS, you will see something like this if you watch your HTTP traffic:

[Screenshot: the CORS preflight exchange captured in Fiddler]

Before the actual request is sent (with the data posted in the body), a simple OPTIONS request is issued to determine what is possible with the resource that is about to be requested. Additionally, it includes Access-Control-Request headers that indicate which HTTP method and which headers the client would like to send. The server must respond in kind, saying that it allows requests from the origin domain (notice the Origin header above calling out the site originating the request), and that it supports the method/headers that the origin site is interested in.
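Stripped down to just the relevant headers, the preflight exchange looks something like this (the origin and allowed values here are illustrative):

OPTIONS /service.svc/ItemAverageRequest HTTP/1.1
Host: your.service.url.here.example.org
Origin: http://some.other.domain.example.org
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: Content-Type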

If it does not respond in the correct manner, the conversation is over, and errors explode all over the place.

Getting back to the BizTalk world for a second, if your REST Receive Location isn’t expecting this, you will see this lovely response:

[Screenshot: the error response returned when the receive location is not expecting the preflight request]

Technically, you won’t see any Access-Control headers at all in the response (I took the screenshot after un-configuring some things, but apparently I missed that one). Ultimately the error comes down to a contract mismatch. More specifically, it will read something like this:

The message with To ‘http://your.service.url.here.example.org/service.svc/operation’ cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher.

This OPTIONS request actually needs to become part of your contract for each endpoint for which you wish to allow cross-domain communications. Additionally, you have to be ready to provide a happy and valid response for this message.

Configuring IIS to Send CORS Headers

Ultimately there are a few ways we could inject the headers required to make CORS happen. I’m going to have IIS inject those headers (rather than doing it at the adapter configuration level), specifically at the level of the virtual directory created by the WCF Service Publishing Wizard when I published my REST endpoint. Examining the Features View for that virtual directory, we should find HTTP Response Headers under the IIS group:

[Screenshot: the HTTP Response Headers feature in IIS Manager]

From there, I can begin adding the headers for those things that I would like to allow:

[Screenshot: adding a custom Access-Control-Allow-Origin response header]

Above, I’m setting a header which indicates that I wish to allow requests from any origin domain. After some mucking around with Fiddler to find out what my little test page would actually be requesting using IE, I ended up with this set of headers that seemed to make the world happy:

[Screenshot: the full set of CORS response headers configured in IIS]
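If you would rather keep this in source control than click through IIS Manager, the equivalent configuration can live in the virtual directory’s web.config. The header values shown below are the sort of set described above, not necessarily the exact ones from the screenshot:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Access-Control-Allow-Origin" value="*" />
      <add name="Access-Control-Allow-Methods" value="POST, GET, OPTIONS" />
      <add name="Access-Control-Allow-Headers" value="Content-Type" />
    </customHeaders>
  </httpProtocol>
</system.webServer>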

Pesky Empty Messages

Even with the correct headers being sent back in the response, the response code will still be 500 until we can properly deal with the incoming empty OPTIONS message, and ensure that we are routing a response back such that a 200 response code is returned.

In BizTalk Server, the easiest way to get a message from the receive pipeline to route back to the send pipeline is through the RouteDirectToTP property. This is the same property that ensures EDI acknowledgements are routed directly back to the trading partner on the send pipeline of the receive port, and the same property the ESB Toolkit uses in its Forwarder pipeline component (short-circuiting a default instance subscription for messaging-only request/response service requests, and restoring it after multiple requests have been made).

So how are we going to use it here? Well, we’re going to detect whether we’re dealing with a CORS message in the pipeline. If we are, then we are going to promote that property with a value of “true” to indicate that we want that empty message to flow right back to the response pipeline as the empty body of our lovely 200 OK response. The code for that looks a little something like this (use your imagination or github for the values of those constants):

#region Handle CORS Requests

// Detect if the incoming message is an HTTP CORS request
// http://www.w3.org/TR/cors/
            
// If it is, we will promote both the RouteDirectToTP property and the
// EpmRRCorrelationToken so that the request is immediately routed
// back to the send pipeline of this receive port

object httpMethod = inmsg.Context.Read(HTTP_METHOD_PROPNAME, WCF_PROPERTIES_NS);

if (httpMethod != null && (httpMethod as string) == OPTIONS_METHOD)
{
	object curCorrToken = inmsg.Context.Read(EPM_RR_CORRELATION_TOKEN_PROPNAME, SYSTEM_PROPERTIES_NS);
	inmsg.Context.Promote(EPM_RR_CORRELATION_TOKEN_PROPNAME, SYSTEM_PROPERTIES_NS, curCorrToken);
	inmsg.Context.Promote(ROUTE_DIRECT_TO_TP_PROPNAME, SYSTEM_PROPERTIES_NS, true);

	var corsDoc = new XmlDocument();
	corsDoc.AppendChild(corsDoc.CreateElement(DEFAULT_PREFIX, CORS_MSG_ROOT, JSON_SCHEMAS_NS));
	writeMessage(inmsg, corsDoc);

	return inmsg;
}

#endregion

The reason that we are bothering to create an XML document out of this empty message is that this decode component will be followed by an XML disassembler component that will need to recognize the data. By the end of this post we will have two special-case XML documents for which we will create schemas. In the case of this CORS message, we will need to ensure that the JsonEncoder returns this XML as an empty response body. The following code takes care of this other side of the equation:

#region Handle CORS Requests

// Detect if the incoming message is an HTTP CORS request
// http://www.w3.org/TR/cors/

object httpMethod = inmsg.Context.Read(HTTP_METHOD_PROPNAME, WCF_PROPERTIES_NS);

if (httpMethod != null && (httpMethod as string) == OPTIONS_METHOD)
{
	// Remove the message body before returning
	var emptyOutputStream = new VirtualStream();
	inmsg.BodyPart.Data = emptyOutputStream;

	return inmsg;
}

#endregion

At this point we should be able to receive the OPTIONS message through the receive pipeline, and have it properly route back through the response pipeline. Beyond that, we just need to make sure that we pair our actual resources, within the operation mapping of the adapter config, with mappings for the CORS request:

<Operation Name="CrossDomainCheck" Method="OPTIONS" Url="/ItemAverageRequest" />
<Operation Name="ItemAverageRequest" Method="POST" Url="/ItemAverageRequest" />

Examining JSONP

Well that was one way that we could allow for cross-domain requests – requests that allowed for the POSTing of lots of wonderful data. But what if we just want to get some data, we don’t want to waste HTTP requests, and we’re not using a browser that supports CORS (or maybe we just want to live dangerously)? There is an alternative – JSONP.

Essentially this is a cheap hack that got its own acronym to legitimize how terrible of an idea it actually is.

I think the best way to explain this one is to examine a sample code snippet, and then the conversation that we want to get out of it:

var request = $.getJSON($("#serverUrlJsonp").val() + "?callback=?");

Notice the ?callback=? portion of that snippet. It’s appending a querystring parameter named callback to the URL, with a value to be filled in by jQuery. So what will it fill in? A randomly generated function for receiving data back from the server. But how is the server going to execute local code within the browser, you might ask? Let’s take a look:

[Screenshot: the JSONP request and response captured in Fiddler]

jQuery seems to populate the callback querystring parameter with a suitably random value. Check out what happens in the response. The response is a function call to that randomly generated callback function. The way JSONP is cheating the system to get a cross-domain request working is by injecting a script node into the DOM with the source set to the service (passing the callback name in the querystring). Then as the service responds, the response is executed as JavaScript code. This is definitely one where you’re putting a lot of trust in the service.
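To make that concrete: if the client requested ...?callback=jQuery18104_1372, the raw response body would not be a JSON document at all, but a script statement along these lines (callback name and payload invented for illustration):

jQuery18104_1372({"items":[{"id":1,"amount":42}]});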

So how do we make it work with BizTalk?

Capturing the JSONP Callback Name on Receive

JSONP only works if the service can craft a call to the callback function named by the callback querystring parameter. The easiest way to capture that name is through an operation mapping combined with variable mapping:

<Operation Name="GetItemListJsonp" Method="GET" Url="/ItemsList?callback={callback}" />

[Screenshot: the variable mapping configuration for the callback parameter]

Essentially what we’re doing here is relying on the adapter to promote the callback function’s name to the context of the message, so that it will be available as we generate the response JSON.

Pipeline Component Changes to Deal with JSONP

In order to deal properly with JSONP, we will have to make some minor modifications to both the JsonDecoder and the JsonEncoder pipeline components. Starting with the JsonDecoder component, which will execute upon initial receipt of our request message, we will add the following code:

#region Handle JSONP Request

// Here we are detecting if there has been any value promoted to the jsonp callback property
// which will contain the name of the function that should be passed the JSON data returned
// by the service.

object jsonpCallback = inmsg.Context.Read(JSONP_CALLBACK_PROPNAME, JSON_SCHEMAS_NS);
string jsonpCallbackName = (jsonpCallback ?? (object)string.Empty) as string;

if (!string.IsNullOrWhiteSpace(jsonpCallbackName))
{
	var jsonpDoc = new XmlDocument();
	jsonpDoc.AppendChild(jsonpDoc.CreateElement(DEFAULT_PREFIX, JSONP_MSG_ROOT, JSON_SCHEMAS_NS));
	writeMessage(inmsg, jsonpDoc);

	return inmsg;
}

#endregion

This code is examining the callback context property to determine if there is any value promoted. If there is, it is assumed that this is a JSONP request. It also assumes that all JSONP requests will have an empty body, and as such will require a placeholder node in order to make it through the following XmlDisassembler.

On the response side, these are the modifications made to the JsonEncoder component:

#region Handle JSONP Request

// Here we are detecting if there has been any value promoted to the jsonp callback property
// which will contain the name of the function that should be passed the JSON data returned
// by the service.

object jsonpCallback = inmsg.Context.Read(JSONP_CALLBACK_PROPNAME, JSON_SCHEMAS_NS);
string jsonpCallbackName = (jsonpCallback ?? (object)string.Empty) as string;

if (!string.IsNullOrWhiteSpace(jsonpCallbackName))
	jsonString = string.Format("{0}({1});", jsonpCallbackName, jsonString);

#endregion

Here we’re jumping in at the last minute, before the JSON formatted data is returned, and wrapping the whole thing in a function call – if the callback function name exists in the context.

Building a Simple Proof of Concept Service

After burning an evening on these components, I decided it would only be worthwhile if I could actually use them to do something interesting.

The requirement I set out for myself was a service that could be called cross-domain from a simple .html page with only client-side JavaScript goodness. As a result I resurrected my dual listener orchestration anti-pattern from one of my previous posts (a hijacked convoy pattern?) to create the following monstrosity:

[Screenshot: the proof-of-concept orchestration]

So what’s going on here, and what does this service allow me to do? Well, it exposes two resources: (1) ItemsList and (2) ItemAverageRequest. They both deal in “items”. There really is no meaning for what an item is; I just needed something to test with, and “item” was a nice fit.

The first resource, ItemsList, is something that you can GET. It will return a nice JSON formatted list of “items”, each having an “amount” of some kind and a unique “id”.

The ItemAverageRequest resource is a place where you can POST a new ItemAverageRequest (really this is just a list of items exactly as returned by the first resource), and as a result you will receive the cumulative average “amount” for the items in the list.
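To make the shapes concrete, here is roughly what the payloads looked like in my testing (field values invented; only the id/amount structure matters, and the exact response property name is illustrative):

// GET /ItemsList response (also the body you POST to /ItemAverageRequest)
{ "items": [ { "id": 1, "amount": 10.0 },
             { "id": 2, "amount": 20.0 } ] }

// POST /ItemAverageRequest response
{ "average": 15.0 }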

The maps in the orchestration shown above ensure that these resources have the described behavior.

Testing the Service

So did all of this work pay off, and give us happy results? I built a simple test client to find out. Here’s what the JSONP request looks like against the ItemsList resource (the textarea in the background contains the raw response from the service):

[Screenshot: the JSONP request against the ItemsList resource in the test client]

The raw HTTP conversation for this request was actually shown above in the Fiddler screenshot from the JSONP discussion.

Leaving that data in the textarea as input, and then invoking the POST against the ItemAverageRequest resource yields the following:

[Screenshot: the result of the POST against the ItemAverageRequest resource]

The HTTP conversation for this request happened in two parts, represented below:

[Screenshot: the CORS preflight portion of the exchange]

[Screenshot: the POST request and JSON response]

Summary

I really hope I was able to add something to the conversation around using JSON formatted data with the WCF-WebHttp adapter in BizTalk Server 2013. There’s certainly more to it than just getting the happy XML published to the MessageBox.

Was a pipeline component the right choice to handle protocol-level insanity like CORS? Probably not; that part is something that we should ultimately have implemented by extending WCF and having the adapter deal with it, especially since CORS really doesn’t have anything to do with JSON directly. I’ll leave that to the commenters and other BizTalk bloggers out there to consider.

JSONP, however, is ultimately an output encoding issue, and since we had already dealt with one cross-domain communications issue in the pipeline, it was natural to handle JSONP in the same fashion.

Before I sign off for the remainder of the week, I want to take this time to remind you that if you would like to take your BizTalk skills to the next level with BizTalk Server 2013, you should check out one of our upcoming BizTalk Server 2013 Deep Dive Classes, where you will learn all about fun stuff like custom pipeline component development and have an opportunity to get hands on with some of the new adapters!

If you would like to access sample code for this blog post, you can find it on github.

Why Updated Platform Support Matters

This post is the fifth in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013.

Each new version of BizTalk brings with it support for the latest Microsoft operating system, database engine, and integrated development environment. It is listed as a new feature each time, and many tend to skip right over it on the new features list. However, this is actually very powerful – especially when considering the change over time in those underlying platforms.

In this post, I’m going to examine how the Microsoft Fakes framework (found in Visual Studio 2012.2 Premium) can enable interesting testing scenarios for code that wasn’t built specifically for test. In the process, I’m also going to be using the Winterdom PipelineTesting library – which is easily upgraded to 2013 with some re-referencing of BizTalk assemblies.

Testing the Un-Testable

The Microsoft Fakes framework (available in Visual Studio 2012 Ultimate, or 2012.2 Premium edition and above) provides the ability to create stubs of any .NET interface and shims of any .NET class. While the stubs capability is pretty neat, stubs don’t feel nearly as magical as shims.

Consider that you have developed a pipeline component with a little bit of logic to inject a context property based on the current date/time. The (admittedly contrived) scenario I have come up with is that of an order processing system in which rush orders are all received through the same receive location. That location uses a custom pipeline that must promote a property to route each order to the warehouse that will be able to get it picked and shipped the quickest (based on which warehouse is picking orders at the current hour, or will be doing so next). When testing, we will need to verify the behavior of the code at specific times.

While we could write the code in such a way that the retrieval of the current date/time was done through a special date/time provider class (which implements some interface specialized to that purpose), the logic is being re-used from another location – one which will not be modified.

BizTalk Server 2013 rises to this occasion because it relies on Visual Studio 2012 as its development environment. We can use the included Microsoft Fakes framework to create a shim that will inject logic into all calls to DateTime.UtcNow, returning a fixed DateTime object of our choosing.

Getting Started with Shims

In order to get started, we need to instruct Visual Studio to generate some helper classes for the assembly or assemblies of our choosing. These classes will enable us to generate Stubs/Shims for classes within these assemblies.

For our solution, we will be working with the DateTime class, found in System. We can add a fakes assembly in two clicks via the context menu:

[Screenshot: the Add Fakes Assembly context menu item]

After adding the assembly, our project, and its references, look like this:

[Screenshot: the project and its references after adding the fakes assembly]
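Behind the scenes, that gesture just drops a small .fakes file into the project. For the DateTime shim, it should look roughly like this (mscorlib being the assembly that actually contains DateTime):

<Fakes xmlns="http://schemas.microsoft.com/fakes/2011/">
  <Assembly Name="mscorlib" Version="4.0.0.0" />
</Fakes>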

Now we will have to write the code in such a way that calls to DateTime.UtcNow will return the value of our choosing. In order to hook into those calls only within a specific portion of code, we will need to create a ShimsContext. Because the ShimsContext class implements IDisposable, we are able to wrap a using statement around it to provide better visibility of the scope it will control.

        [TestMethod]
        public void RushOrderMarker_FirstPartOfDay_FirstWarehouseReturned()
        {

            // Arrange
            var pipeline = GeneratePipeline();
            var testMessage = GenerateTestMessage();

            DateTime fakeTime = new DateTime(
                                    year: 2013,
                                    month: 01,
                                    day: 01,
                                    hour: 1, minute: 0, second: 0);

            using (ShimsContext.Create())
            {

                System.Fakes.ShimDateTime.UtcNowGet = () => { return fakeTime; };
                var expectedId = WarehouseIdentifiers[0];

                // Act
                var output = pipeline.Execute(testMessage);

                // Assert
                var warehouseId = output[0].Context.Read(WAREHOUSE_ID_PROPERTY_NAME, WAREHOUSE_ID_PROPERTY_NAMESPACE);
                Assert.AreEqual(expectedId, Convert.ToString(warehouseId), "Incorrect warehouse id returned");

            }

        }


Here we assign logic directly to System.Fakes.ShimDateTime.UtcNowGet; it will execute whenever any code within the scope opened by ShimsContext.Create() attempts to call DateTime.UtcNow. We then submit a message through our pipeline and verify that the correct warehouse id was promoted into the context of the message (based on the shimmed current time).

Know Your Tools

If you’ve just made the move up to BizTalk Server 2013, it really is important that you know not only the feature set of BizTalk itself, but also what is available to you in the underlying platform and in the tools you use to create your applications.

If you want to learn BizTalk Server 2013 in the environment it was designed for, check out one of our upcoming BizTalk 2013 Developer Immersion or BizTalk Server 2013 Deep Dive classes. The hands-on-lab environment is running the latest and greatest versions of Windows Server, SQL Server, and Visual Studio, so that you have the optimal BizTalk Server development experience.

If you would like to access sample code for this blog post, you can find it on github.

Windows Azure Service Bus Queues + BizTalk = Out of Body Experiences

This post is the third in a weekly series that will highlight things that you need to know about new features in BizTalk Server 2013. One of the major new features of BizTalk Server 2013 is the native ability to integrate with Windows Azure Service Bus through the new SB-Messaging adapter. This week we will be covering what can happen if you’re trying to integrate with applications that are loading up your Windows Azure Service Bus Queues with Brokered Messages made purely of properties.

Out of Context

Context properties are not a foreign concept to BizTalk developers. Indeed, BizTalk messages themselves are made up of both a message context and body parts. The same is true to some extent of Service Bus Brokered Messages – though the implementation differs greatly.

To deal with context properties of Service Bus Messages, the SB-Messaging adapter provides us the ability to promote these properties to the context of the BizTalk message generated upon receipt of a Brokered Message. The adapter configuration that controls this behavior looks like this:

[Screenshot: the SB-Messaging adapter’s property promotion configuration]

One major difference between the properties of a Brokered Message and the context properties of a BizTalk message is that BizTalk message context properties have a namespace to disambiguate properties that share the same name. Brokered Messages have no such thing – and no concept of a property schema for that matter (which is what allows BizTalk to have an awareness of all possible properties that can exist in the context).

The adapter configuration setting pictured above is a bridge between those worlds that allows the Brokered Messages properties to be understood by BizTalk Server. Of course, you will have to define a property schema for those properties on the BizTalk side. Here’s an example of a really simple one that defines two custom properties:

[Screenshot: a simple property schema defining two custom properties]
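With a schema like that deployed, reading one of the adapter-promoted properties back out in custom code is a simple context read against the namespace the schema defines (the names below match the sample message later in this post, and are themselves just examples):

// Read a Brokered Message property that the adapter promoted into the
// BizTalk message context (the namespace comes from your property schema)
object value = inmsg.Context.Read("CustomProperty1", "http://schemas.example.org/");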

All of this will function pretty perfectly – assuming that the message sent to the Queue has a body.

Out of Body

At last year’s TechEd NA conference in Orlando, one of the presenters (I believe it was Clemens Vasters) was discussing what Brokered Messages were, and how they were exchanged. In the talk, he mentioned that it was perfectly fine, and even preferable in some cases, to create a Brokered Message that was purely properties without any body content. Certainly this was a light-weight operation that relied only on HTTP headers, and required no overhead for serialization of body content. And at the time, I accepted this, and was happy.

As a result, a lot of my code interacting with the Service Bus involved snippets like this:

var client = Microsoft.ServiceBus.Messaging.QueueClient.CreateFromConnectionString(connString, queueName);
var message = new Microsoft.ServiceBus.Messaging.BrokeredMessage();
message.Properties.Add("CustomProperty", "Value");
message.Properties.Add("CustomProperty2", "Value2");
client.Send(message);

Notice here, we’re just passing around properties, and not dealing with a message body. That’s fine when I’m receiving the message in .NET code living somewhere else, but when moving into the BizTalk world, it became problematic. When attempting to receive such a message (even with the property schema deployed, and everything else otherwise happy), I would receive this (using the PassThruReceive pipeline):

[Screenshot: the receive pipeline failure error message]

For those trying to use Google-fu to solve the same problem, the error message reads:

“There was a failure executing the receive pipeline: ‘Microsoft.BizTalk.DefaultPipelines.PassThruReceive, Microsoft.BizTalk.DefaultPipelines, Version=3.0.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35’ Source: “Pipeline” Receive Port: “WontMatchWhatYouHaveAnyway” URI: “sb://namespacehere-ns.servicebus.windows.net/somequeuenamehere” Reason: The Messaging Engine encountered an error while reading the message stream.

If you came to this article because this error message appeared, and you’re not trying to receive an empty message from Azure Service Bus, just head right back to the Bing results page, and check the next result. Everyone else, stick with me.

Let’s take a step back for a second. BizTalk Server 2013 is designed for integration between systems. I typically should not be adjusting my behavior outside of BizTalk to work with BizTalk. So here I want to take the same approach.

What are our options here? Well, clearly we have a failure in the pipeline while processing an entirely empty message body, which doesn’t allow us to get to the point where we are processing a map. That removes our ability to use the excellent ContextAccessor functoid to pull the properties into the body of the message upon receipt.

Instead we are left with building a custom solution.

PropertyMessageDecoder Pipeline Component

What is this custom solution I speak of? For now, I’m calling it the PropertyMessageDecoder pipeline component. It is a custom pipeline component that creates a message body based on the context properties of a BizTalk message. If you already have a message body, this pipeline component will not help you, since it blindly overwrites the body if one exists. Instead, this is a purpose-built pipeline component for the specific scenario addressed by this blog post.

I started development of this component by using Martijn Hoogendoorn’s BizTalk Server Pipeline Component Wizard. Unfortunately, it hasn’t yet been updated to work with BizTalk Server 2013, so I had to first create/upload a patch for BizTalk 2013.

Once I was able to create a custom pipeline component project, I created a simple pipeline component that took the input message and dumped the context properties into the message body. I also included some configurable properties that allow one to opt out of properties from the BizTalk system properties namespace, or to filter out all properties except those from a specific namespace (especially helpful in this scenario).

Here’s the bulk of the Execute method for that component:

public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
{
    Stream dataStream = new VirtualStream(VirtualStream.MemoryFlag.AutoOverFlowToDisk);
    pc.ResourceTracker.AddResource(dataStream);

    using (XmlWriter writer = XmlWriter.Create(dataStream))
    {
        // Start creating the message body
        writer.WriteStartDocument();
        writer.WriteStartElement("ns0", ROOT_NODE_NAME, TARGET_NAMESPACE);

        for (int i = 0; i < inmsg.Context.CountProperties; i++)
        {
            // Read in current property information
            string propName = null;
            string propNamespace = null;
            object propValue = inmsg.Context.ReadAt(i, out propName, out propNamespace);

            // Skip properties that we don't care about due to configuration (default is to allow all properties)
            if (ExcludeSystemProperties && propNamespace == SYSTEM_NAMESPACE) continue;
            if (!String.IsNullOrWhiteSpace(CustomPropertyNamespace)
                    && propNamespace != CustomPropertyNamespace) continue;

            // Create Property element
            writer.WriteStartElement(PROPERTY_NODE_NAME);

            // Create attributes on Property element
            writer.WriteStartAttribute(NAMESPACE_ATTRIBUTE);
            writer.WriteString(propNamespace);
            writer.WriteEndAttribute();

            writer.WriteStartAttribute(NAME_ATTRIBUTE);
            writer.WriteString(propName);
            writer.WriteEndAttribute();

            // Write value inside property element
            writer.WriteString(Convert.ToString(propValue));
            writer.WriteEndElement();
        }

        // Finish out the message
        writer.WriteEndElement();
        writer.WriteEndDocument();
        writer.Flush();

    }

    dataStream.Seek(0, SeekOrigin.Begin);
    var outmsg = Utility.CloneMessage(inmsg, pc);
    outmsg.BodyPart.Data = dataStream;

    return outmsg;
} 

The output will end up conforming to this schema:

[Screenshot: the PropertyMessage schema]

And will look a little something like this:

<?xml version="1.0" encoding="utf-8"?>
    <ns0:PropertyMessage xmlns:ns0="http://schemas.quicklearn.com/PropertyMessage/2013/06/">
        <Property Namespace="http://schemas.example.org/" Name="CustomProperty1">Value1</Property>
        <Property Namespace="http://schemas.example.org/" Name="CustomProperty2">Value2</Property>
    </ns0:PropertyMessage>

When included in a pipeline alongside the XML disassembler component, we are finally able to take those pesky “empty” messages from Azure Service Bus and process them as XML documents. The message context properties are still retained in the context for routing, but our custom properties are also made available for use directly in maps (through the liberal use of Value Mapping (Flattening) and Logical functoids), and further processing.

To follow this through to the end, I created a quick proof-of-concept map:

[Screenshot: the proof-of-concept map]

Essentially, we’re looping through the properties from the context (hence the Value Mapping (Flattening) functoid) and only mapping the value stored inside the Property node whenever the namespace and name of the property match those that we care about. Since the namespace for all of our custom properties is the same, we have a single Equals functoid that we’re sharing across all (two) properties.
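For those who think more readily in code than in functoids, the map’s per-property logic is roughly equivalent to this LINQ to XML query (a sketch only, not part of the actual solution):

using System.Linq;
using System.Xml.Linq;

// Assuming propertyMessageXml holds the PropertyMessage document shown above;
// the Property elements are unqualified, so no namespace is needed in the query
var doc = XDocument.Parse(propertyMessageXml);

string customProperty1 = doc
    .Descendants("Property")
    .Where(p => (string)p.Attribute("Namespace") == "http://schemas.example.org/"
             && (string)p.Attribute("Name") == "CustomProperty1")
    .Select(p => p.Value)
    .FirstOrDefault();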

After processing a message all the way through the bus, custom pipeline, and map, this should result in this beauty as opposed to a nasty error message:

[Screenshot: the mapped output message]

Un-Borked Brokered Messages

Remember that none of this stuff is necessary if you’re working with Brokered Messages that have a message body. You only need to go this deep when you’re dealing with Brokered Messages made purely of properties (which is a recommended practice when possible).

By bringing out-of-the-box first class support for Windows Azure Service Bus Queues, BizTalk Server 2013 continues to prove that with a solid and extensible architecture in place, any type of integration can be made possible.

If you would like to learn more about extending BizTalk Server 2013 to meet your integration challenges, check out one of our upcoming BizTalk 2013 Developer Deep Dive classes.

If you would like to access sample code for this blog post, you can find it on github.