Mapping Code Pairs in the WABS Mapper

This post is the twenty-third in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013.

Given that one of the major benefits of using Windows Azure BizTalk Services is the ability to offload some of our EDI processing to the cloud, a major use case for the new WABS mapper becomes EDI mapping. One of the fun tasks that comes out of that is dealing with code pairs (if you’re interested in the on-premise side of this problem, the best coverage to date is found here). In this post, I’m going to examine how one might approach this problem using the new tooling available in the WABS SDK.

Give Me an Example of Code Pairs

Maybe you’re reading this and thinking to yourself, “I don’t really care about EDI, but WABS looks interesting,” and you still want to be able to gain something from this discussion without getting buried in the esoterica that is EDI. If that’s the case for anyone out there, here’s an example of the problem I’m going to try to solve using the WABS mapper.

We have an input that is a repeating record of items, where there is a qualifier that describes the type of item we’re looking at, and a value that provides the content of the item. Consider this schema:

image

Here we have (at some point in the schema) a list of phone numbers, identified by a Type field element (our qualifier) and the Number field element (our value). The Type field is defined as follows:

image

We have an output that is a flat listing of items where we have a set of fields representing a few of these different types of numbers:

image

So what we are trying to do is iterate through the list of numbers in the input, map the value of the item with the Mobile qualifier over to the CellNumber output field, and grab the value of the Home-qualified number and toss it into the HomeNumber output field. It’s something that looks trivial, and yet is annoying to do en masse using the BizTalk Mapper:

image

In a Land Without Flattening Value Mappers

So what does this look like when brought over to the land of WABS using .trfm style maps? Well, looking into the toolbox, we do have an item called Conditional Assignment, which is essentially a Value Mapper:

image

If we run with that (combined with some Logical Expression map operations), we end up with something like this:

image

Unfortunately, this isn’t a Value Mapper (Flattening) type map operation. As a result, this will only work if we are interested in a single qualifier, and that qualifier happens to show up in the first code pair that appears in the document.

So what do we do now? Do we reach for XSLT and code it up by hand (after all, it is super easy to pull off)? Let’s take a look at how the migration tool that comes with the SDK handles this scenario:

image

It adds two MapEach operations to the map, configured something like this:

image

As you can see, the MapEach was made conditional upon a comparison of the Type field element to a constant string input. However, this conditional check is repeated inside the MapEach operation within the chained Logical Expression and Conditional Assignment map operations. While the map technically works as generated by the tool, it looks horrible, and it really feels like there ought to be a better way.

Another Approach

If you don’t want to use the MapEach approach (which really wouldn’t be so bad if our two nodes weren’t hanging out at the root as siblings), then we can take advantage of the new mapper’s ability to generate lists of items in memory while working through the mapping (really, this is kind of similar to the Table Looping and Table Extractor functoids).

In order to make that happen, we use the Create List map operation, and somewhere inside it include the Add Items operation to add items to the list. The Create List operation is configured with the names of the fields of each item in the list. It then contains a ForEach operation with a nested Add Items operation. The ForEach takes the parent repeating node of our code pairs as input, while the Add Items operation takes each value that we want to include for each source item:

image

Here’s what the configuration of the Create List map operation looks like:

image

Once we have the list created, we can select values out of the list using the Select Value operation (in fact we’ve used two of them):

image

Does it Work?

Technically, both approaches worked; they took this input and generated the output shown below it:

image

image

I’m going to be honest here: it still feels comparatively clunky, and I’m probably going to be reaching for that XSLT override pretty frequently. That is, unless there’s yet another way that’s cleaner and simpler than those presented thus far (this is one of those posts that I hope to revisit in the future with a correction showing an even better way to approach the problem).
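For reference, the hand-coded XSLT for this scenario really is compact. A sketch along these lines would do it — Type, Number, CellNumber, and HomeNumber come from the example above, while the repeating record name PhoneNumber is assumed, and real schemas would add namespaces that I’m omitting here:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <Output>
      <!-- Pick out the value for each qualifier of interest, wherever it appears -->
      <CellNumber>
        <xsl:value-of select="//PhoneNumber[Type = 'Mobile']/Number" />
      </CellNumber>
      <HomeNumber>
        <xsl:value-of select="//PhoneNumber[Type = 'Home']/Number" />
      </HomeNumber>
    </Output>
  </xsl:template>
</xsl:stylesheet>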

Next Week

Next week, I will be wrapping up this series and shifting focus back to day-to-day BizTalk Development and all the fun things that we’re actively working on there.

Until then, happy mapping with shiny new tools!

Building Blocks of Windows Azure BizTalk Services

This post is the twenty-second in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013.

Since the last BizTalk Summit, there has been a lot of coverage within the greater BizTalk Community of Windows Azure BizTalk Services – the Azure-backed offering that brings pieces of the core BizTalk Server functionality to the cloud. This is something that QuickLearn had an early opportunity to get hands-on with, and also something for which we were able to create labs (even before public availability). At the same time, it’s something that we haven’t really written about until now.

This week, rather than going through specific how-to guidance for pieces of the offering, I instead want to take a step back and look at the offering as a whole, so that we all understand the pieces that are there (and can see the forest for the trees). If you want to follow along, go ahead and download the Windows Azure BizTalk Services Preview SDK.

Once installed, you’ll find that Visual Studio has been juiced up with two additional project templates found under the BizTalk Services heading:

image

The first project type, BizTalk Service, allows you to define Itineraries (not in the ESB sense, though they do display a complete message flow, much like ESB) and the configuration for Bridges (much like a BizTalk Pipeline).

The second project type, BizTalk Service Artifacts, allows you to define schemas and maps (.trfm files, though you can convert your existing .btm files, with some limitations, using a tool provided at the SDK download link). The mapper is something I’m not going to discuss any further here; instead I will recommend this excellent blog post on the subject by Glenn Colpaert, as well as the official documentation on the conversion process/limitations.

So we can create these Itineraries, configure these Bridges, and deal with Schemas and Maps, but how does this all fit together within the context of Windows Azure BizTalk Services? To understand the answer to that, we have to first address the three things that Windows Azure BizTalk Services is trying to do:

  • Rich Messaging Endpoints (Itineraries + Bridges + Maps)
  • BizTalk Adapter Service (Service Bus Relay + Local Helper Service + BizTalk Adapter Pack)
  • Business to Business Messaging (EDI Schemas + All of the Above)

Rich Messaging Endpoints

image

I’m going to start with the weirdest of the bunch – Rich Messaging Endpoints. The developer experience here is similar (to a point) to defining an ESB Toolkit itinerary, but only inasmuch as you’re seeing the entire message flow within a single diagram. Here you can define certain sources (FTP/SFTP) for messages, have them routed through Bridges (similar to Pipelines), and then have them routed out to certain destinations (with CBR available to direct messages to one or multiple destinations).

In order to specify the settings necessary to connect to those destinations, you often find yourself within a configuration file defining WCF-related binding settings directly:

image

Double-clicking on a bridge within the itinerary will bring up a semi-clone of the Pipeline Designer with a fixed set of components in place (as well as a location to specify maps to execute directly as part of the pipeline):

image

In terms of integration patterns, the bridges here are giving us a subset of the VETRO pattern (namely the VETR part of it): Validate, Enrich, Transform, and Route. Where is the Route part of the equation, you might ask? Well, you’ll actually find it in the Properties window for the bridge, by clicking the button in the Route Ordering Table property:

image

Here, we can do context-based routing that determines the destination for each message leaving the bridge (though here, I have only connected a single destination endpoint).

BizTalk Adapter Service

image

What happens if the list of destinations doesn’t suit me? What if I want to take information from an SFTP drop, transform it into something that I could use to generate a record within a table in SQL, and then directly insert that record into my on-premise system? In that case, I’ll find myself reaching for the BizTalk Adapter Service feature.

This one has a nice list of dependencies that need to be in place before you can even use it (e.g., Windows Server AppFabric, BizTalk Adapter Pack), but once you have those, it’s fairly straightforward to set up.

What it’s really providing is a single WCF endpoint that is exposed over a Service Bus Relay endpoint (and thus accessible from anywhere in the world – even if hosted behind a fairly strict firewall). This single endpoint can be passed messages destined for any number of internal systems that you can set up through the Server Explorer interface within Visual Studio. The BizTalk Team Blog actually had a pretty decent article on the topic back in June that sadly generated 0 comments.

Essentially, this is allowing you to bring your LOB systems into the mix to play along with everything else already mentioned (assuming those LOB systems have WCF adapters included in the BizTalk Adapter Pack).

Business to Business Messaging

The final capability that WABS is bringing to the table is B2B messaging (i.e., EDI). Through a special portal (still in preview) within the Windows Azure Management interface, we have the ability to create Parties and Agreements a la BizTalk Server on-prem. In fact, the exact same schemas are used to define the X12 messages, so if you’ve already had to do some schema customizations on a per-partner basis for in-house integrations, those same changes can now be brought to the cloud.

Pulling it All Together

Is this replacing BizTalk Server on-premise as we know it? Not quite. We have a lot of the same pieces: Transport (Limited)/Translation (Limited)/Transformation and Context-based Routing (Content-based Routing when using the Enrich stage of the Bridge). We are missing more complete process orchestration with exception handling / transactions / compensation (Orchestrations). We are missing rule-driven injectable logic (Business Rule Engine), among other things (though I’m stopping the list here to avoid debate about similar (but not quite) functionality in Azure in general).

So what do we do with this? Use it to solve integration problems that use a hybrid between cloud-based and on-premise resources (e.g., http://msdn.microsoft.com/en-us/library/windowsazure/hh859742.aspx). Use it when an on-premise BizTalk installation would be overkill for the task, but it can be done happily using the tools available in WABS. Use it to take over the most taxing parts of your B2B processing, leaving BizTalk Server on-premise to do those things it’s great at.

Ultimately, it’s up to you. BizTalk Server 2013 and Windows Azure BizTalk Services together give you the power to decide how to break up your integration work, and how you approach the integration challenges that are thrown your way.

If you haven’t given the SDK a download yet, go do it. Though the service is still in preview, the SDK has been available for quite some time now, and great things are right around the corner – so keep your skills sharp.

That’s all for now!

BizTalk Server 2013: Eliminating the Dependency on SQL Server 2005 Notification Services

This post is the twenty-first in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013.

Over the last few years, I’ve had my fair share of times describing to people the installation/configuration process for BizTalk Server and more specifically the Business Activity Monitoring component. During those discussions, the oddball requirement for SQL Server 2005 Notification Services comes up and sparks a mild level of controversy: “Why would I need a component from an older version of SQL Server to use BAM Alerts?” Thankfully, with BizTalk Server 2013, I won’t be having that discussion as frequently.

It Looks Like You’re Writing a Letter, Do You Want Some Help?

BizTalk Server 2013 is now able to take advantage of SQL Server 2012’s Database Mail feature to send BAM Alerts. It’s pretty straightforward to get it all set up. You start by taking a trip over to SQL Server Management Studio, and choosing Configure Database Mail on the item of the same name under the Management folder in Object Explorer:

image

From there, you walk through a fairly straightforward wizard to create a new profile and specify the SMTP server/credentials to use to send email:

image

While we specify the account here as bam@quicklearn.com, that’s ultimately NOT the address the alert will be delivered FROM (which will instead be bam@microsoft.com):

image

As a quick test (of both the settings entered throughout the remainder of the wizard, and the local SMTP server), I used the Send Test E-Mail feature to ensure all was happy with the setup (but this step isn’t 100% necessary):

image

And this email was the result:

image
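Incidentally, if you prefer to skip the wizard’s test dialog, the same check can be made from T-SQL using the msdb sp_send_dbmail stored procedure (the profile name and recipient below are illustrative):

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'BAM Alerts',        -- the profile created in the wizard
    @recipients   = 'admin@quicklearn.com',
    @subject      = 'Database Mail test',
    @body         = 'If you can read this, Database Mail is configured correctly.';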

What Does the BAM Side of Configuration Look Like?

Assuming you’ve already gone through the standard BizTalk Server Configuration wizard and enabled/configured all of the BAM-related functionality, it becomes a matter of deploying the activities/views you’re interested in, and then enabling alerts for those views:

image

From there, the BAM Portal will expose the alert options within the UI. As an example, I created a quick BAM View that incorporated the concept of processing time – with the idea that if even a single item took too long to process, an alert would be triggered.

For an alert like this, the activity search page is a fine place to configure the parameters. Here we start with a query as if we are searching for an item where the processing time is greater than 5 seconds, but instead of executing the query directly, you can click Set Alert to use the query criteria as the basis for an alert:

image

After you click the button, it directs you to a page where you can define what the message looks like, and then, after that is set up, to add subscribers for the alert. The subscribers can be either email addresses that will receive the notification, or a file share:

image

Prove That This Works

In order to follow through all the way (without SQL Server 2005 Notification Services), I quickly threw together some code to populate the activity with some data that should satisfy the alert condition, and cause the alert to fire:

image
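In case the code in the screenshot is hard to make out, the gist of it is just a few calls against the BAM event observation API. Here’s a minimal sketch — the activity name, milestone names, and connection string are assumptions that must match your own deployed BAM definition:

using System;
using Microsoft.BizTalk.Bam.EventObservation;

class PopulateActivity
{
    static void Main()
    {
        // Connection string to the BAMPrimaryImport database (local default instance assumed)
        const string connection =
            "Integrated Security=SSPI;Data Source=.;Initial Catalog=BAMPrimaryImport";

        // DirectEventStream writes synchronously; a flush threshold of 1 flushes after every event
        EventStream stream = new DirectEventStream(connection, 1);

        string activityInstanceId = Guid.NewGuid().ToString();

        // "ItemProcessing", "Started", and "Finished" are hypothetical names --
        // they must match the activity/items deployed from your BAM definition
        stream.BeginActivity("ItemProcessing", activityInstanceId);
        stream.UpdateActivity("ItemProcessing", activityInstanceId,
            "Started", DateTime.UtcNow.AddSeconds(-10),   // 10 seconds of processing time,
            "Finished", DateTime.UtcNow);                 // comfortably over the 5 second threshold
        stream.EndActivity("ItemProcessing", activityInstanceId);
    }
}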

After executing the code, I found this beauty sitting in the drop folder ready for delivery:

image

BAM!

So what are the limitations here (because there’s always a catch, right)? Well, you can take this approach provided that you are using SQL Server 2012 along with BizTalk Server 2013. If you’re using SQL Server 2008 R2, then we still have to have that awkward discussion, where I link you over to the download page for the Feature Pack for SQL Server 2005 SP4 and then die a little inside.

BizTalk Server 2013 is trying to do things right and make some of these stories cleaner and more logical. There will be 3 more posts in this series to finish driving that home and complete the grand tour of the new functionality. From there, we will move on to development / course-development battle stories, and also vNext functionality as information becomes available.

Until then, stay tuned!

BizTalk Server 2013 Ordered Delivery Improvements

This post is the twentieth in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013.

Almost buried in the middle of the new features list, you will find that, for those in the healthcare industry, Microsoft has finally delivered on a long-standing ask in the form of improvements to the handling of ordered delivery in BizTalk Server 2013. The ask was for better performance when using the MLLP adapter, but in reality, anyone who relies on ordered delivery with more than a single Send Port can benefit and see anywhere from 300% to 500% better performance.

This is the glimpse into the improvement that we get from the What’s New article:

In previous BizTalk versions with scenarios where ordered delivery uses multiple endpoints, the slowest adapter type is the maximum speed. This behavior directly impacts HL7 solutions. When using the MLLP adapter, acknowledgements (ACK) are required. Delays can occur because an ACK must be received before the next message is sent.

In BizTalk Server 2013, the next message can be sent without waiting for an ACK from the previous message. When the ACK arrives, it is internally correlated/connected with its message.

What was the Problem?

If you spend some time snooping around the blogs on MSDN from those involved in BizTalk Server development, you are likely to come across this post (back from when BizTalk Server 2013 hadn’t yet been christened 2013).

The post explains that in previous versions of BizTalk Server, whenever a host had a set of messages queued up and ready for delivery, if the first one was destined for an ordered Send Port, then the Host would be blocked until delivery of the message was successful (regardless of available threads, or other non-ordered Send Ports).

So what is the fix?

In the 2013 version of BizTalk Server, instead of relying only on the Host Queue, and blocking all messages that follow an ordered message, each Send Port with ordered delivery enabled is dedicated its own thread within the Host Instance, and messages sent to those locations will not block any other Send Ports that also have messages ready and waiting.

How Do I Take Advantage of This New Functionality?

That’s the really beautiful part about this feature: it’s built into the Messaging Engine. There is nothing special that you have to configure to enable this behavior, no code that you have to write – it’s just there and better by default. The only thing to keep in mind is that since each ordered send port is dedicated its own thread, you may want to go to the BizTalk Settings Dashboard and ensure that the Messaging Engine Threads setting is appropriately configured to accommodate those Send Ports along with any other work that the Host has been assigned.

Best BizTalk Server Version Yet

If you haven’t made the move to BizTalk Server 2013 yet, or at least explored the evaluation to see what it has to offer, I would strongly encourage you to do so. Of course, if you want some dedicated hands-on time and instruction at the same time, I highly recommend our completely overhauled BizTalk 2013 Administrator Immersion class.

That’s all for this week. I’m back in the saddle after spending some time on vacation, so I will be back again next week. Until then, take it easy!

Getting Started with the ESB Toolkit 2.2 Extensibility Samples in BizTalk Server 2013

This post is the nineteenth in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013.

This week I wanted to highlight something about the integration of the ESB Toolkit 2.2 within the installation media of BizTalk Server 2013, but I didn’t want the post to end there – providing no more value than a quick perusal of the MSDN docs. As a result, while I will rehash some of that content in the first few sections of this post, I also want to dive into something that is of greater direct interest to the community at large – Itinerary Designer extensibility in 2013.

Now that I’ve spilled the beans: yes, the ESB Toolkit 2.2 can now be found on the installation media for BizTalk Server 2013. It lives in the ESBT_x64 and ESBT_x86 folders alongside the core BizTalk setup.

Though the what’s new page for BizTalk Server 2013 lists a known issue with the Itinerary Designer vsix file not being at the normal location, the initial installation actually appears to have installed the designer for me – so there were no direct issues there – and it was definitely a welcome change. Though I will admit that I was more than a little bit worried when I spotted the BizTalk Server 2010 logo in the installer:

image

Basic Configuration

Another change I noticed along the way of installing the ESB Toolkit is to the way the configuration tool works – and these changes were certainly more welcome. I am currently running BizTalk on a single-server Windows Azure VM for the purposes of writing this specific blog post, and I’m really not a big fan of putting a lot of setup work into such a one-off effort. I’ve even gone so far as to have a general-purpose setup script that I use to install Visual Studio / Office / and configure BizTalk each week. So I was definitely happy when I saw that something akin to BizTalk Server’s “Basic Configuration” is now available for ESB Toolkit 2.2:

image

While it’s unlikely that I would do a blanket default settings setup for a production environment, it certainly is helpful at development time at least.

Easier Installation of Microsoft.Practices.ESB Application

Another welcome addition to the configuration tool is the ESB BizTalk Applications page. This page allows you to install the BizTalk applications which provide the core Itinerary On-ramps as well as the JMS/WMQ components. This removes the step of having to dig around for the correct MSI in the installation directory of the Toolkit, and running through the typical import/install dance:

image

While I was there, I did notice that the Help menu doesn’t actually contain any menu items – something that will likely go unnoticed as so few tend to actually read the manual.

In this version, I would have liked to see integrated publishing of the requisite tModels to UDDI here in order to support the UDDI resolver (without having to resort to the UDDI publishing command line tool). I would also have liked to see more granular editing/interaction directly within the tool for configuration stored in SSO (at the very least, to be able to look up effective configuration).

That being said, the additions that did make it in here are certainly welcome.

Examining the Designer Extensibility Samples

Now that we have the initial configuration out of the way, let’s get to something a little bit more interesting. I have seen some interest on the MSDN forums lately around ESB extensibility, so I wanted to dig in and see how everything fit together in the ESB Toolkit 2.2 + BizTalk Server 2013 world.

As a result, I turned to one of my favorite samples that ships with the Toolkit (in the ESBSource.zip file, found in the installation directory for the Toolkit) — the Designer Extensibility Samples. This set of samples demonstrates how you can provide access to custom components (e.g., custom Resolvers, custom Orchestration-based Itinerary Services) within the Itinerary Designer.

One of the samples shows how to create a designer extension that allows the setting of typed properties on an Orchestration-based Itinerary Service. This allows you to avoid using a Resolver to specify configuration settings (or creating a custom resolver for that purpose). In order to retrieve the settings, the Orchestration-based service can make use of the PropertyBag property of the IItineraryStep instance related to the step, which allows you to enumerate a set of KeyValuePair<string, string> values. (This is the mechanism currently used by the Broker Service to store its specialized configuration data; you can see an example of an itinerary including the broker here.)
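To make that concrete, here is a sketch of the kind of helper an orchestration-based service might call to read a setting out of that property bag (the class and the setting name are hypothetical, and it assumes a reference to the Microsoft.Practices.ESB.Itinerary assembly):

using System;
using System.Collections.Generic;
using Microsoft.Practices.ESB.Itinerary;

public static class StepConfig
{
    // Returns the value of a named setting from the step's property bag, or null if absent.
    // Typically invoked from an Expression shape inside the orchestration-based service.
    public static string Read(IItineraryStep step, string name)
    {
        foreach (KeyValuePair<string, string> pair in step.PropertyBag)
        {
            if (string.Equals(pair.Key, name, StringComparison.OrdinalIgnoreCase))
                return pair.Value;
        }

        return null;
    }
}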

The main magic for this sample is really just the use of the PersistInPropertyBag attribute on a property within a class that derives from ItineraryServiceExtenderBase, and is decorated with the ObjectExtender attribute. But getting it working in 2013 was no small task.

Troubleshooting the Designer Extensibility Samples

It seems as if samples are the last things touched during a major version upgrade, and this was no exception. To get this puppy going, I had to do some work. The first thing I had to do was to install both the (1) Microsoft Visual Studio 2012 Visualization & Modeling SDK (formerly DSL SDK) and (2) Enterprise Library 5.0.

After those pre-requisites were out of the way, I had to (3) add a reference to System.ComponentModel.DataAnnotations (because, for whatever reason, Visual Studio outright refused to believe that the StringLengthValidatorAttribute from EntLib was indeed an attribute). I checked the inheritance chain and we had ValueValidatorAttribute : ValidatorAttribute : BaseValidationAttribute : ValidationAttribute : System.Attribute.

While this inheritance chain has a depth worthy of the label Enterprise, it does correctly end in System.Attribute. The kicker here (and thus the reason for adding the reference) is that ValidationAttribute is from the System.ComponentModel.DataAnnotations assembly. Ugh.

The final bit of voodoo that was required was to build the samples and copy the output to the Lib folder for the Itinerary Designer. So where is that path? Well, while it used to live in two different places (one under the current user’s AppData folder), that has all changed with Visual Studio 2012. Now, you can get away with copying it to the magical path: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft.Practices.Services.Itinerary.DslPackage\Lib
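To avoid doing that copy by hand after every build, a post-build event along these lines works well (a sketch; adjust the path to your Visual Studio edition, and note that Visual Studio needs to run elevated to write under Program Files):

xcopy /Y "$(TargetPath)" "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft.Practices.Services.Itinerary.DslPackage\Lib\"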

Once you’ve built both of the samples and copied them to that Lib folder, you will have the ability to create an Itinerary Service and specify the OrchestrationSample Itinerary Service Extender. This will populate the Properties window with properties that can be persisted to the Property Bag as shown:

image

Additionally, the Resolver sample extender also functions beautifully1, and gives us some nice properties to work with:

image

Final Thoughts

This week, I don’t really have any greatly inspired final thoughts (which is looking like it’s somewhat of a pattern now). I am happy to see the ESB Toolkit more directly integrated with the product, and not something tossed completely to the side. I definitely welcome the improvements that are there so far, and welcome any further changes that will make lives easier – especially with regard to the samples.

It looks like we’re probably going to be spinning up our courseware development efforts around ESB Toolkit 2.2 pretty soon here — so keep watching this space for a class announcement when it becomes available.

NOTE: Next week, I will be at Disneyland on vacation with my family. While I do volunteer evenings and weekends to put these longer posts together for (hopefully) the benefit of the community, I’m not about to do that while on vacation. As a result, there will not be a new post in the series that week. The series will resume the week of October 7th. Until then, take care! 🙂

1 I just want to give a quick shout-out here to Dean Kasa at Vancity, who posed the question of whether this can work in ESB 2.2 – looks like it can to me; sorry I can’t reproduce the issue you’re seeing.

BizTalk Server 2013 New Features Series Index

I thought it would be wise to create a post that can serve as a continually updated index of the articles that I have written within the BizTalk Server 2013 New Features Series here on the QuickLearn Team Blog. This post is that listing, and it also includes links to the code developed for each post where applicable.

Without further ado, here’s where we have been so far:

  1. What the BizTalk Server 2013 Mapper Updates Mean for You
    Introducing XslCompiledTransform and returning null values from functoids
  2. Acknowledging a Subtle Change in BizTalk Server 2013
    Examining X12 999 Acknowledgments
  3. Windows Azure Service Bus Queues + BizTalk = Out of Body Experiences
    How to handle messages that are only properties [Code Sample]
  4. SharePoint Integration without Cheating in BizTalk 2013
    Introducing the Client-side Object Model support in the improved SharePoint adapter.
  5. Why Updated Platform Support Matters
    Using PipelineTesting library in concert with the Fakes/Shims support of Visual Studio 2012 [Code Sample]
  6. A Brief Tour of the BizTalk Server 2013 Evaluation Windows Azure Virtual Machine
    How to quickly set up the BizTalk 2013 IaaS offering
  7. Publishing a WCF Service via Service Bus Relay in BizTalk Server 2013 (Part 1)
    Using the installed-by-default AppFabric Connect functionality to expose/consume a NetTcp endpoint in IIS via Service Bus Relay — for a client that understands Service Bus Relay
  8. Publishing a WCF Service via Service Bus Relay in BizTalk Server 2013 (Part 2)
    Using the new adapters to expose/consume a BasicHttp endpoint hosted In-Process via Service Bus Relay — so that it is transparent to the client
  9. Automating and Managing BizTalk Server 2013 with PowerShell
    Shows off the soafactory PowerShell Provider for BizTalk while also trying to give PowerShell 101
  10. Exploring the Out-of-the-Box Support for SFTP in BizTalk Server 2013
    Using the SFTP adapter to talk to an Ubuntu Linux box on Azure
  11. Dynamic Send Port Improvements in BizTalk Server 2013
    Shows off the new Send Handler Mapping dialog for Dynamic Send Ports
  12. BizTalk Server 2013 and TFS 2012 – Why Can’t I Lock the Files
    Shows how to configure your TFS workspace to be BizTalk ready
  13. BizTalk Server 2013 Support for RESTful Services (Part 1/5)
    GETing Data + Context Property URL Mapping
  14. BizTalk Server 2013 Support for RESTful Services (Part 2/5)
    Receiving POSTed XML + URL Rewriting [Code Sample]
  15. BizTalk Server 2013 Support for RESTful Services (Part 3/5)
    POSTing UrlEncoded Data to ASP.NET Web API Service [Code Sample]
  16. BizTalk Server 2013 Support for RESTful Services (Part 4/5)
    JSON Decoding + Cross-domain communication with JSONP and CORS [Code Sample]
  17. BizTalk Server 2013 Support for RESTful Services (Part 5/5)
    Using OAuth to authenticate with cloud service providers’ APIs
  18. BizTalk Server 2013: Discovering Dependencies
    Exploring the Dependencies Statistics Panel
  19. Getting Started with the ESB Toolkit 2.2 Extensibility Samples in BizTalk Server 2013
    Examining ESB Install/Configuration Process for Development / Troubleshooting the Designer Extensibility Samples
  20. BizTalk Server 2013 Ordered Delivery Improvements
    Discussing the specific improvements that are seen in how BizTalk Server 2013 handles Ordered Delivery.
  21. BizTalk Server 2013: Eliminating the Dependency on SQL Server 2005 Notification Services
    Configuring BAM Alerts without touching SQL Server 2005 Notification Services.
  22. Building Blocks of Windows Azure BizTalk Services
    Overview of Windows Azure BizTalk Services Features.
  23. Mapping Code Pairs in the WABS Mapper
    Demonstrates attempted approaches for dealing with EDI code pairs using the new WABS mapper.
  24. Top 5 Indicators of the BizTalk Server 2013 Community’s Vitality
    Survey of the BizTalk Server 2013 ecosystem

BizTalk Server 2013: Discovering Dependencies

This post is the eighteenth in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013.

This week, I have decided to spotlight a simple feature that probably won’t be readily noticed unless you know it’s there and you’re looking for it: Dependency Tracking1. This feature can prevent you from being yelled at by a dreaded 4-6 line error dialog, or from other unforeseen consequences, provided you check dependencies before removing or redeploying an artifact.

The New Dependency Statistics Panel

If you have spent much time in the new BizTalk Server 2013 Administration Console, you may have noticed that there is a new panel that shows up along the bottom:

image

If you’re anything like me, then it probably took you a while to figure out how to make this panel display information. As you can see, it hangs out at the bottom of the screen devoid of content by default.

In order to get those statistics, you have to access the context menu of the artifact that you’re interested in, and then click View Dependencies:

image

This will populate the two sections of the panel with (1) the number of items that depend on the artifact, and (2) the number of items that the artifact depends on:

image

Clicking on one of these numbers will take you to the matching items with a breadcrumb to bring you back into context:

image

While none appeared when viewing the Orchestration dependencies, the items may include Schemas, Maps, Pipelines, Orchestrations, Receive Locations, Receive Ports, and Send Ports (e.g., below, where we have Receive Locations / Maps in play):

image

Discovering Dependencies Not Found in the Binaries

As you continue to drill down and view dependencies of each item, you will see the breadcrumb update with the dependency chain. Here I can see that the original Orchestration I was examining ultimately depends on the JsonReceive pipeline (due purely to configuration in this case):

image

The Little Things

This is one of those little things that I have not yet had to make heavy use of, but I can see how it will bring value while tearing through more complex integrations trying to make heads or tails of what’s really going on.

If you want to try this out for yourself and get a glimpse of the Admin experience in BizTalk Server 2013, come join us in October, November, or December for our BizTalk 2013 Administrator Immersion class.

1 Interestingly enough, this is the first feature highlighted in a similar series that Nitin Mehrotra has been running on his blog.

BizTalk Server 2013 Support for RESTful Services (Part 5/5)

This post is the seventeenth in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013. It is also the final part of a five-part series on REST support in BizTalk Server 2013.

I’m going to forego a long introduction this week by simply stating outright that we are going to see how it is possible to do OAuth with BizTalk Server 2013. That being said up front, I do want to take a few moments to recall where we have been, and what has led up to this post.

Re-cap of the RESTful Services Series

We’ve covered quite a bit of ground so far in this mini-series. If you have not yet read the parts leading up to this post, you will want to do so before we begin. You can find the previous posts here:

Giving BizTalk 2013 the Valet Keys

If you’ve been listening to the messaging and buzz around BizTalk Server 2013, it would appear that one of the major focuses of the 2013 version is enabling integration with cloud-based services. Most of these services expose APIs that make it possible to automate that integration, and a growing number of those are RESTful APIs. Each service provider can choose the methods with which it performs authentication and authorization; however, a fairly common choice1 is to use OAuth for authorization.

Let’s imagine that we’re going to use BizTalk Server 2013 to call the Salesforce API. Before we can make the API call, a call is made to request a special token (essentially a valet key) that will provide limited access to specific functionality within Salesforce for BizTalk Server’s use.

This means that for each call that is made, a corresponding token request call must be made. How can we handle this? Ultimately, we could make BizTalk Server do this any number of ways. We could use a custom pipeline component in the Send pipeline to request the token before making the call. We could set up an orchestration to coordinate these two sends – or we could even call into custom .NET code which will get the token. However, as with the CORS headers last week, I would rather push this need closer to the transport (since that’s closer to where the issue originates).

Making WCF Do the Heavy Lifting

Since, in our fictional scenario here, we’re dealing with a RESTful API, the WCF-WebHttp adapter will be in play. Since it is a WCF adapter, we are able to take advantage of the WCF extensibility model during configuration via the Behavior tab of the adapter configuration:

image

What does this enable us to do? At the end of the day, it allows us to take full control of the message right before being transmitted over the wire (and even right after being received over the wire in the case of a reply). For the purposes of implementing OAuth, we can actually simply look to a portion of one of the tutorials that shipped as part of the product documentation for BizTalk Server 2013 on MSDN: Integrating BizTalk Server 2013 with Salesforce.

By reading the document linked above, you will find that the tutorial is taking advantage of the IClientMessageInspector interface to provide a last-minute hook into the communications process. During that last minute, in the BeforeSendRequest method, the OAuth token is retrieved through a specially constructed POST request to the OAuth authorization endpoint (which can differ depending on the instance to which you have been assigned):

    private void FetchOAuthToken()
    {
        if ((tokenExpiryTime_ == null) || (tokenExpiryTime_.CompareTo(DateTime.Now) <= 0))
        {
            StringBuilder body = new StringBuilder();
            body.Append("grant_type=password&")
                .Append("client_id=" + consumerKey_ + "&")
                .Append("client_secret=" + consumerSecret_ + "&")
                .Append("username=" + username_ + "&")
                .Append("password=" + password_);

            string result;

            try
            {
                result = HttpPost(SalesforceAuthEndpoint, body.ToString());
            }
            catch (WebException)
            {
                // do something
                return;
            }

            // Convert the JSON response into a token object
            JavaScriptSerializer ser = new JavaScriptSerializer();
            this.token_ = ser.Deserialize<SalesforceOAuthToken>(result);
            this.tokenExpiryTime_ = DateTime.Now.AddSeconds(this.sessionTimeout_);
        }
    }

NOTE: I do not advocate merely swallowing exceptions. You will want to do something with that WebException that is caught.

After the token is returned (which is decoded here using the JavaScriptSerializer, as opposed to taking a dependency on Json.NET for the JSON parsing), an Authorization header containing the OAuth token provided by the authorization endpoint is appended to the request message.

WebHeaderCollection headers = httpRequest.Headers;
FetchOAuthToken();

headers.Add(HttpRequestHeader.Authorization, "OAuth " + this.token_.access_token);
headers.Add(HttpRequestHeader.Accept, "application/xml");

Registering Extensions

While the tutorial provides instructions for registering extensions via the machine.config file, a ready alternative is available.

For any WCF-based adapter, each host can provide unique configuration data specifying extensions that are available for use by that adapter when executed by the host. Performing the registration here can ease the configuration burden on those BizTalk groups containing more than a single machine.

image
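Whichever location you register in, the entry itself is a standard WCF behavior-extension registration along these lines (the extension name, type, and assembly details below are placeholders for your own inspector’s behavior element):

<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="salesforceOAuthBehavior"
           type="MyCompany.BizTalk.OAuth.OAuthBehaviorExtensionElement, MyCompany.BizTalk.OAuth, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef" />
    </behaviorExtensions>
  </extensions>
</system.serviceModel>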

Ready to REST

I highly recommend that you read that tutorial, and spend some time replicating that solution so that you can see what BizTalk Server 2013 is bringing to the table with regards to integration with RESTful services.

I hope that this series has also accomplished the same – that you can see the possibilities, the potential extensibility points, and have a greater appreciation for BizTalk Server 2013 as the integration tool ready to tackle those more tricky hybrid on-premise meets cloud scenarios.

Until next week, take care!

1 Just because it’s common does not mean it is the least complex or best choice – but it does mean that it can end up being one you’re stuck with.

Visual Studio and TFS 2013 RC Available + new courses

The Release Candidates (RC) for Visual Studio 2013 and Team Foundation Server 2013 are now available for download from the MSDN Library. These new releases bring a suite of new and enhanced capabilities to help your development team succeed.

Some of my favorite new features, in no particular order, include:

  • Code Lens (Code indicators right in your code)
  • Browser Link (Awesome for web developers)
  • Agile portfolio management (Hierarchical requirements in TFS)
  • Git support for on-premise TFS (More options in case Git is your version control system of choice)
  • Team Room (A great place for team collaboration)
  • Enhanced web based Test Case Management

To coincide with the official launch of Visual Studio 2013 and Team Foundation Server 2013, QuickLearn will be launching a new and expanded training curriculum next month. Our content developers have been busily validating our new content against the RC release in anticipation of our biggest TFS curriculum upgrade since we started delivering TFS training over 4 years ago.

Keep an eye out for our next newsletter, which will provide details for eight VS/TFS 2013 courses, including brand new self-paced online courses as well as our popular live courses.

BizTalk Server 2013 Support for RESTful Services (Part 4/5)

Did you click a link from the newsletter expecting a post on ESB Toolkit 2.2? Head right over here. Otherwise, keep reading!

This post is the sixteenth in a weekly series intended to briefly spotlight those things that you need to know about new features in BizTalk Server 2013. It is also the fourth part of a five-part series on REST support in BizTalk Server 2013.

If you haven’t been following the REST series, stop reading right now, and then read from the beginning of the series. I’m not re-covering any already-covered ground in this post. 😉

In this post, we will discuss dealing with incoming JSON requests, such that they become readable/map-able XML for which we can generate a schema. Ultimately, there are two ways that this can be accomplished, and both have actually already been extensively documented. The first method is by using a custom pipeline component (demonstrated here by Nick Heppleston); the second is by extending WCF with a custom behavior (briefly discussed here by Saravana Kumar).

Both of the articles linked in the above paragraph really only deal with the receive side of the story. In this article, I will set out to demonstrate both receiving JSON data and responding with JSON data. In the process, I will demonstrate two methods of dealing with cross-domain requests (CORS and JSONP), and build pipeline components and pipelines to handle both scenarios.

Getting Started with a Custom Pipeline Component

In order to get started with custom pipeline component development, I will be reaching again for the BizTalk Server Pipeline Component Wizard (link points to the patch for BizTalk Server 2013, which requires InstallShield LE and an utterly maddening registration/download process). I am going to use the wizard to create two projects – one for a JsonDecoder pipeline component and one for a JsonEncoder pipeline component.

Each of these projects will be taking a dependency on Newtonsoft’s excellent Json.NET library for the actual JSON serialization/deserialization. This will ultimately be done through the official NuGet package for Json.NET.

Since we can have JSON data that starts as just a raw list of items, we likely want to wrap a named root item around it for the purpose of XML conversion. In order to choose the name for that root node, we will rely on the name of the operation that is being processed. However, in the case of a single operation with a lame name, we can expose a pipeline component property to allow overriding that behavior.

Another pipeline component property that would be nice to expose is a target namespace for the XML that will be generated.

Creating XML from JSON Data

Creating XML from JSON formatted data is actually supported out of the box with the Json.NET library. We can do so with the following code:

var jsonString = jsonStream.Length == 0
    ? string.Empty
    : Encoding.GetEncoding(inmsg.BodyPart.Charset ?? Encoding.UTF8.WebName)
              .GetString(jsonStream.ToArray());

// Name the root node of the generated XML doc after the operation, unless
// a specific name has been specified via the RootNode property.
var rawDoc = JsonConvert.DeserializeXmlNode(
    jsonString,
    string.IsNullOrWhiteSpace(this.RootNode) ? operationName : this.RootNode,
    true);

// Here we are ensuring that the custom namespace shows up on the root node
// so that we have a nice clean message type on the request messages.
var xmlDoc = new XmlDocument();
xmlDoc.AppendChild(
    xmlDoc.CreateElement(DEFAULT_PREFIX, rawDoc.DocumentElement.LocalName, this.Namespace));
xmlDoc.DocumentElement.InnerXml = rawDoc.DocumentElement.InnerXml;

In the above snippet, you are seeing the operationName variable out of context. That variable contains the value of the current operation requested (which is defined by you within the adapter properties for the WCF-WebHttp adapter, and then matched at runtime to the HTTP Method and URL requested).

Another weird thing that we are doing here is making sure that we have full control over the root node, by allowing it to be generated by the Json.NET library and then replacing it with our own namespace-qualified root node.

Creating JSON from XML Data

Taking XML data and generating JSON formatted data is nearly as easy:

XmlDocument xmlDoc = new XmlDocument();
xmlDoc.Load(inmsg.BodyPart.Data);

if (xmlDoc.FirstChild.LocalName == "xml")
	xmlDoc.RemoveChild(xmlDoc.FirstChild);

// Remove any root-level attributes added in the process of creating the XML
// (Think xmlns attributes that have no meaning in JSON)

xmlDoc.DocumentElement.Attributes.RemoveAll();

string jsonString = JsonConvert.SerializeXmlNode(xmlDoc, Newtonsoft.Json.Formatting.Indented, true);

There are a few lines of code there dedicated to dealing with some quirks in the process. Namely, if you do not remove both the potentially pre-pended “xml” processing instruction node and the namespace attributes on the root node, these will show up in the JSON output – yikes!

Creating Schemas for JSON Requests and Responses

Assuming that the data will be in XML form whenever BizTalk gets a hold of it, how can we generate a representative schema without much pain? Our best bet will be simply taking some JSON, running it through the same process above, and then generating a schema from the resulting XML.

If you haven’t already, you will need to install the Well Formed XML schema generator by heading over to the C:\Program Files (x86)\Microsoft BizTalk Server 2013\SDK\Utilities\Schema Generator folder and then running the InstallWFX.vbs script that you will find there:

image

In order to make it easier to run some JSON formatted data through this process, I created a quick no-frills sample app that can do the conversion and save the result as an XML file:

image
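The core of such a converter is only a few lines with Json.NET. A minimal sketch might look like this (the root element name and the command-line handling are arbitrary choices here):

using System.IO;
using Newtonsoft.Json;

class JsonToXml
{
    // Usage: JsonToXml.exe input.json output.xml
    static void Main(string[] args)
    {
        string json = File.ReadAllText(args[0]);

        // "Request" is an arbitrary root element name; use whatever suits the operation
        var xmlDoc = JsonConvert.DeserializeXmlNode(json, "Request", true);
        xmlDoc.Save(args[1]);
    }
}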

Once we have the output of that application, we can run it through the schema wizard, and end up with a schema that looks somewhat like the following (for which much polishing is required with regards to the selection of data types):

image

Dealing with Cross-Site Requests

Modern browsers will be quite leery of allowing for cross-domain requests (i.e., requests originating from a site on one domain which target a service on a different domain). That is not to say, however, that every instance of this is inherently bad.

One of the things that I wanted to make sure of is that I could publish a service that would allow requests originating from a different domain. There are two common approaches for allowing this to happen.

One of them is handled at the HTTP level, through specialized headers that are passed through a series of requests to ensure that a service is willing to be called from a different domain. This method is known as CORS, or Cross-Origin Resource Sharing.

The other method is much hackier and really ought to be reserved for those cases where the browser can’t cope with CORS. This other method is JSONP (JSON with Padding).

Examining CORS

Let’s start this whole discussion by taking a look at CORS first. Assume that we have the following code:

var request = $.ajax({
	url: $("#serverUrl").val(),
	type: 'POST',
	contentType: 'application/json',
	data: JSON.stringify(data),
	processData: false,
	dataType: 'json'
})

Now assume that the item on the page with the id serverUrl contains a URL for a completely external site. If that is the case, and you’re using a modern browser that supports CORS, you will see something like this if you watch your HTTP traffic:

image

Before the actual request is sent (with the data posted in the body), a simple OPTIONS request is issued to determine what is possible with the resource that is about to be requested. Additionally, it includes Access-Control-Request headers that indicate which HTTP Method is desired, and which headers are desired (to be sent). Here the server must respond in kind, saying that it allows requests from the origin domain (notice the Origin header above calling out the site originating the request), and that it supports the Method/Headers that the origin site is interested in.

If it does not respond in the correct manner, the conversation is over, and errors explode all over the place.

Getting back to the BizTalk world for a second, if your REST Receive Location isn’t expecting this, you will see this lovely response:

image

Technically, you won’t see any Access-Control headers at all in the response (I took the screenshot after un-configuring some things, but apparently I missed that one). Ultimately, the error comes down to a contract mismatch. More specifically, it will read something like this:

The message with To ‘http://your.service.url.here.example.org/service.svc/operation’ cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher.

This OPTIONS request actually needs to become part of your contract for each endpoint for which you wish to allow cross-domain communications. Additionally, you have to be ready to provide a happy and valid response for this message.

Configuring IIS to Send CORS Headers

Ultimately, there are a few ways we could inject the headers required to make CORS happen. I’m going to go with having IIS inject those headers at the server level (rather than doing it at the adapter configuration level). I am going to do this at the level of the virtual directory created by the WCF Service Publishing Wizard when I published my REST endpoint. Examining the Features View for that virtual directory, we should find HTTP Response Headers under the IIS group:

image

From there, I can begin adding the headers for those things that I would like to allow:

image

Above, I’m setting a header which indicates that I wish to allow requests from any origin domain. After some mucking around with Fiddler to find out what my little test page would actually be requesting using IE, I ended up with a set of headers that seemed to make the world happy:

image
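For reference, the final set was along these lines (values illustrative; in production, you would likely want to lock Access-Control-Allow-Origin down to a specific origin rather than *):

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Content-Type, Accept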

Pesky Empty Messages

Even with the correct headers being sent back in the response, the response code will still be 500 until we can properly deal with the incoming empty OPTIONS message, and ensure that we are routing a response back such that a 200 response code is returned.

In BizTalk Server, the easiest way to get a message from the receive pipeline to route back to the send pipeline is through the RouteDirectToTP property. This is the same property that ensures that EDI Acknowledgements are routed directly back to the trading partner on the send pipeline of the receive port, and the same property used by the ESB Toolkit’s Forwarder pipeline component (short-circuiting a default instance subscription for messaging-only request/response service requests, and restoring it after multiple requests have been made).

So how are we going to use it here? Well, we’re going to detect if we’re dealing with a CORS message in the pipeline. If we are, then we are going to promote that property with a value of “true” to indicate that we want that empty message to flow right back to the response pipeline as the empty body of our lovely 200 OK response. The code for that looks a little something like this (use your imagination or GitHub for the values of those constants):

#region Handle CORS Requests

// Detect if the incoming message is an HTTP CORS request
// http://www.w3.org/TR/cors/
            
// If it is, we will promote both the RouteDirectToTP property and the
// EpmRRCorrelationToken so that the request is immediately routed
// back to the send pipeline of this receive port

object httpMethod = null;
httpMethod = inmsg.Context.Read(HTTP_METHOD_PROPNAME, WCF_PROPERTIES_NS);

if (httpMethod != null && (httpMethod as string) == OPTIONS_METHOD)
{
	object curCorrToken = inmsg.Context.Read(EPM_RR_CORRELATION_TOKEN_PROPNAME, SYSTEM_PROPERTIES_NS);
	inmsg.Context.Promote(EPM_RR_CORRELATION_TOKEN_PROPNAME, SYSTEM_PROPERTIES_NS, curCorrToken);
	inmsg.Context.Promote(ROUTE_DIRECT_TO_TP_PROPNAME, SYSTEM_PROPERTIES_NS, true);

	var corsDoc = new XmlDocument();
	corsDoc.AppendChild(corsDoc.CreateElement(DEFAULT_PREFIX, CORS_MSG_ROOT, JSON_SCHEMAS_NS));
	writeMessage(inmsg, corsDoc);

	return inmsg;
}

#endregion

The reason that we are bothering to create an XML document out of this empty message is that this decode component will be followed by an XML disassembler component that will need to recognize the data. By the end of this post, we will have two special-case XML documents for which we will create schemas. In the case of this CORS message, we will need to ensure that the JsonEncoder returns this XML as an empty response body. The following code takes care of the other side of the equation:

#region Handle CORS Requests

// Detect if the incoming message is an HTTP CORS request
// http://www.w3.org/TR/cors/

object httpMethod = null;
httpMethod = inmsg.Context.Read(HTTP_METHOD_PROPNAME, WCF_PROPERTIES_NS);

if (httpMethod != null && (httpMethod as string) == OPTIONS_METHOD)
{
	// Remove the message body before returning
	var emptyOutputStream = new VirtualStream();
	inmsg.BodyPart.Data = emptyOutputStream;

	return inmsg;
}

#endregion

At this point, we should be able to receive the OPTIONS message through the receive pipeline, and have it properly route back through the response pipeline. Beyond that, we just need to make sure that we pair our actual resources, within the operation mapping of the adapter config, with mappings for the CORS request:

<Operation Name="CrossDomainCheck" Method="OPTIONS" Url="/ItemAverageRequest" />
<Operation Name="ItemAverageRequest" Method="POST" Url="/ItemAverageRequest" />

Examining JSONP

Well, that was one way that we could allow for cross-domain requests – requests that allowed for the POSTing of lots of wonderful data. But what if we just want to GET some data, we don’t want to waste HTTP requests, and we’re not using a browser that supports CORS (or maybe we just want to live dangerously)? There is an alternative – JSONP.

Essentially this is a cheap hack that got its own acronym to legitimize how terrible of an idea it actually is.

I think the best way to explain this one is to examine a sample code snippet, and then the conversation that we want to get out of it:

var request = $.getJSON($("#serverUrlJsonp").val() + "?callback=?");

Notice the ?callback=? portion of that snippet. It’s appending a querystring parameter named callback to the URL, with a value to be filled in by jQuery. So what will it fill in? A randomly generated function name for receiving data back from the server. But how is the server going to execute local code within the browser, you might ask? Let’s take a look:

image

jQuery seems to populate the callback querystring parameter with a suitably random value. Check out what happens in the response: the response is a function call to that randomly generated callback function. The way JSONP cheats the system to get a cross-domain request working is by injecting a script node into the DOM with the source set to the service (passing the callback name in the querystring). Then, as the service responds, the response is executed as JavaScript code. This is definitely one where you’re putting a lot of trust in the service.
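Concretely, the exchange ends up looking roughly like this (the callback name and payload are illustrative):

GET /ItemsList?callback=jQuery19104_1379970000 HTTP/1.1
Host: your.service.url.here.example.org

HTTP/1.1 200 OK
Content-Type: application/json

jQuery19104_1379970000({"Item":[{"Id":"1","Amount":"42"}]});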

So how do we make it work with BizTalk?

Capturing the JSONP Callback Name on Receive

JSONP’s ability to work depends on the ability of the service to craft a call to the callback method named by the callback querystring parameter. The easiest way to capture this name is through an operation mapping combined with variable mapping:

<Operation Name="GetItemListJsonp" Method="GET" Url="/ItemsList?callback={callback}" />

image

Essentially what we’re doing here is relying on the adapter to promote the callback function’s name to the context of the message, so that it will be available as we generate the response JSON.

Pipeline Component Changes to Deal with JSONP

In order to deal properly with JSONP, we will have to make some minor modifications to both the JsonDecoder and the JsonEncoder pipeline components. Starting with the JsonDecoder component, which will be executing upon initial receipt of our request message, we will add the following code:

#region Handle JSONP Request

// Here we are detecting if there has been any value promoted to the jsonp callback property
// which will contain the name of the function that should be passed the JSON data returned
// by the service.

object jsonpCallback = inmsg.Context.Read(JSONP_CALLBACK_PROPNAME, JSON_SCHEMAS_NS);
string jsonpCallbackName = (jsonpCallback ?? (object)string.Empty) as string;

if (!string.IsNullOrWhiteSpace(jsonpCallbackName))
{
	var jsonpDoc = new XmlDocument();
	jsonpDoc.AppendChild(jsonpDoc.CreateElement(DEFAULT_PREFIX, JSONP_MSG_ROOT, JSON_SCHEMAS_NS));
	writeMessage(inmsg, jsonpDoc);

	return inmsg;
}

#endregion

This code is examining the callback context property to determine if there is any value promoted. If there is, it is assumed that this is a JSONP request. It also assumes that all JSONP requests will have an empty body, and as such will require a placeholder node in order to make it through the following XmlDisassembler.

On the response side, these are the modifications made to the JsonEncoder component:

#region Handle JSONP Request

// Here we are detecting if there has been any value promoted to the jsonp callback property
// which will contain the name of the function that should be passed the JSON data returned
// by the service.

object jsonpCallback = inmsg.Context.Read(JSONP_CALLBACK_PROPNAME, JSON_SCHEMAS_NS);
string jsonpCallbackName = (jsonpCallback ?? (object)string.Empty) as string;

if (!string.IsNullOrWhiteSpace(jsonpCallbackName))
	jsonString = string.Format("{0}({1});", jsonpCallbackName, jsonString);

#endregion

Here we’re jumping in at the last minute, before the JSON formatted data is returned, and wrapping the whole thing in a function call – if the callback function name exists in the context.

Building a Simple Proof of Concept Service

After burning an evening on these components, I decided it would only be worthwhile if I could actually use them to do something interesting.

The parameters I set out for myself were that I had to have a service that could be called cross-domain from a simple .html page with only client-side JavaScript goodness. As a result, I resurrected my dual-listener orchestration anti-pattern from one of my previous posts (a hijacked convoy pattern?) to create the following monstrosity:

image

So what’s going on here, and what does this service allow me to do? Well, it exposes two resources: (1) ItemsList and (2) ItemAverageRequest. They both deal in “items”. There really is no meaning for what an item is; I just needed something to test with, and item was a nice fit.

The first resource, ItemsList, is something that you can GET. As a result, it will return a nice JSON-formatted list of “items”, each having an “amount” of some kind and a unique “id”.

The ItemAverageRequest resource is a place where you can POST a new ItemAverageRequest (really, this is just a list of items exactly as returned by the first resource), and in response you will receive the cumulative average “amount” for the items in the list.
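For illustration, the two payloads in play might look something like this (the shapes are assumed from the description above):

// GET /ItemsList response (also usable as the POST body for /ItemAverageRequest)
{ "Item": [ { "Id": "1", "Amount": "10" }, { "Id": "2", "Amount": "20" } ] }

// POST /ItemAverageRequest response
{ "Average": "15" }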

The maps in the orchestration shown above ensure that these resources have the described behavior.

Testing the Service

So did all of this work pay off and give us happy results? I built a simple test client to find out. Here’s what the JSONP request looks like against the ItemsList resource (the textarea in the background contains the raw response from the service):

image

The raw HTTP conversation for this request was actually shown above in the Fiddler screenshot from the JSONP discussion.

Leaving that data in the textarea as input, and then invoking the POST against the ItemAverageRequest resource, yields the following:

image

The HTTP conversation for this request happened in two parts, represented below:

image

image

Summary

I really hope I was able to add something to the conversation around using JSON formatted data with the WCF-WebHttp adapter in BizTalk Server 2013. There’s certainly more to it than just getting the happy XML published to the MessageBox.

Was a pipeline component the right choice to handle protocol-level insanity like CORS? Probably not; that part of it is something that we should ultimately have implemented by extending WCF and having the adapter deal with it — especially since CORS really doesn’t have anything to do with JSON directly. I’ll leave that for the commenters and other BizTalk bloggers out there to consider.

JSONP, on the other hand, is ultimately an output-encoding issue, and since we were already dealing with one cross-domain communications concern in the pipeline, it felt natural to handle CORS in the same fashion as well.

Before I sign off for the remainder of the week, I want to take this time to remind you that if you would like to take your BizTalk skills to the next level with BizTalk Server 2013, you should check out one of our upcoming BizTalk Server 2013 Deep Dive Classes, where you will learn all about fun stuff like custom pipeline component development and have an opportunity to get hands on with some of the new adapters!

If you would like to access the sample code for this blog post, you can find it on GitHub.