Flat file disassemblers in BizTalk expose the document schema property differently than the XML disassembler.
Flat file disassemblers do not allow multiple schema selections; you can select only one flat file schema.
The next option is to chain multiple flat file disassemblers, since a stage can hold 0-255 components and runs with first-match execution. In practice, however, this does not work as expected: the input is only matched against the first configured schema, so this approach can't be used in scenarios where you want multiple schemas.
The second way is to separate out your receive locations and use separate pipelines, but this can be an issue if the sending system generates the files as a single process and wants to send different files to the same location for the middleware to handle.
To solve this need, the SDK comes with a SchemaResolver component that works by reading the content (the first few characters) and determining which schema to use.
For a similar scenario, I wanted to base this on the received file name.
We can read the file name and use the corresponding message type for schema resolution.
The DocumentSpec is retrieved using the message type and written to the context for the disassembler to probe the related schema.
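The steps above can be sketched as a probing pipeline component. This is a minimal, illustrative version: the file-name-to-message-type mapping and the schema names are hypothetical placeholders, and the usual IBaseComponent plumbing is omitted.

```csharp
using System;
using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Sketch: pick the schema from the received file name and hand the
// resolved DocumentSpec to the flat file disassembler via message context.
public partial class FileNameSchemaResolver : IProbeMessage
{
    private const string FileProps = "http://schemas.microsoft.com/BizTalk/2003/file-properties";
    private const string XmlNormProps = "http://schemas.microsoft.com/BizTalk/2003/xmlnorm-properties";

    public bool Probe(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        string fileName = (string)pInMsg.Context.Read("ReceivedFileName", FileProps);

        // Hypothetical mapping from file name to message type (namespace#root);
        // replace with your own lookup (config store, rules, etc.).
        string messageType = Path.GetFileName(fileName)
                                 .StartsWith("ORD", StringComparison.OrdinalIgnoreCase)
            ? "http://MyCompany.Schemas#Order"
            : "http://MyCompany.Schemas#Invoice";

        // Resolve the DocumentSpec for the message type and write it to context
        // so the disassembler probes only that schema.
        IDocumentSpec docSpec = pContext.GetDocumentSpecByType(messageType);
        pInMsg.Context.Write("DocumentSpecName", XmlNormProps, docSpec.DocSpecStrongName);
        return true;
    }
}
```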
This is a very handy solution for having a single receive location for multiple flat files.
BizTalk 2013 R2 has CU5 up for grabs 🙂
Quite a few important fixes.
One of the fixes we reported was around the WSS adapter, and it has been fixed here.
Get a copy and try out the fixes.
One of the major news in BizTalk Server 2016 is the full support for SQL AlwaysOn Availability Groups (AGs). When Microsoft announced this, the crowd cheered, and they moved on to the next slide. There are, however, some bits you should be aware of before deciding on your High Availability (HA) architecture for BizTalk.
SQL Server 2016 is the first (and only) edition to support AlwaysOn for BizTalk Server 2016. The reason is lack of support for MSDTC between BizTalk’s SQL Server databases and other transactional resources. For this reason, High Availability for the database layer in BizTalk scenarios has traditionally been solved by using Windows Server Failover Clusters (WSFC), typically with an active-passive configuration.
First of all, you need to run BizTalk Server 2016 Enterprise, SQL Server 2016 Enterprise, and Windows Server 2016 (or 2012 R2 with KB3090973).
As you may be aware of…
For a couple of weeks I have been working on support for deploying BTDF-built MSIs across servers using BADT.
I have a beta version of this feature implemented in V2.1 of BADT. The latest tool can be downloaded from CodePlex.
The UI looks as below, where one can select the (Beta) Deploy BTDF msi on farm option.
Once the MSI is loaded, the tool populates the required actions and some configuration details.
The Configuration tab lists the Install Wizard configurations from BTDF; you can enter these details in one place and they will be used across all servers in the environment, so there is no need to enter the same details again and again on each server.
The environment list is populated from the SettingsFileGenerator.xml bundled within the MSI.
I have tested the tool with the sample applications that ship with BTDF (HelloWorld and Advanced).
With my limited use of BTDF, I am not sure whether more actions are needed here, and I don't have more cases to test the tool against. I would request the community to try the feature and suggest issues/ideas to make it robust.
One of the frequent discussions and wishes around BizTalk application deployment has been tackling dependent application deployment.
Consider a simple dependency scenario:
Application A --> dependent on Application B
Application B --> dependent on Application C
If I want to deploy any changes to Application A, I need to clean up B and C, then deploy A, and then redeploy Application C and Application B in that order.
This is a scary scenario and poses a lot of challenges. Such situations should be avoided in the first place while designing the solutions, but nonetheless BADT will come with a dependency resolver that takes care of organizing the deployment activities for you.
I created a contrived, not-so-frequently-occurring set of dependent projects.
Now, if there is any change in MyApplication.ApplicationA, I need to resolve all the interdependent applications to make sure the environment is up again after MyApplication.ApplicationA is deployed.
The tool now comes with a tab page related to Dependent Applications and options to resolve the dependency from there.
You can right-click on the application node and select the MSI for it. This MSI is the package that will be used during the deployment process to redeploy the dependent applications after resolution.
Once the MSIs for all the dependent applications are selected, the actions can be loaded. The load action resolves the dependency order and lists the actions in the order required for deployment.
At this point you will notice that the order of deployment/redeployment has been resolved and listed. Select the actions and run the tool.
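At its core, the resolution the tool performs is a topological sort of the application dependency graph. As a rough standalone sketch (the dictionary shape and names here are illustrative, not BADT's actual API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of dependency resolution: given "app -> apps it depends on",
// produce a deployment order in which every dependency is deployed first.
static class DependencyResolver
{
    public static List<string> DeployOrder(Dictionary<string, List<string>> dependsOn)
    {
        var order = new List<string>();
        var visiting = new HashSet<string>();   // detects cycles
        var done = new HashSet<string>();

        void Visit(string app)
        {
            if (done.Contains(app)) return;
            if (!visiting.Add(app))
                throw new InvalidOperationException("Circular dependency at " + app);
            foreach (var dep in dependsOn.TryGetValue(app, out var deps)
                                 ? deps : Enumerable.Empty<string>())
                Visit(dep);
            visiting.Remove(app);
            done.Add(app);
            order.Add(app);                     // all dependencies are already listed
        }

        foreach (var app in dependsOn.Keys) Visit(app);
        return order;
    }
}
```

For the A --> B --> C example above, this yields C, B, A as the redeployment order; reversing it gives the cleanup order.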
Recommendations and ideas are most welcome. Comment on how you see the feature and the user interface/interaction shown above.
PS: This is a continuation of this great blog post and aims to address a few of its drawbacks.
One of the frequent requirements in integration is delivering aggregated data to systems for further processing. BizTalk facilitates this, and there are several aggregation patterns in BizTalk.
But for large message sets many of these patterns are insufficient. During a recent project we had a similar requirement, and I came across this blog post, which solved the problem for us. We introduced a few tweaks because there were some problems: specifically, the delay before the batch gets written was not safe for us to assume, as the batches were big and the load on the server was high. So from a performance and reliability point of view we had to adjust it.
We also tweaked how the header and trailer are written to make it more generic.
Guaranteed delivery and writing of the batch message was achieved by using DeliveryNotification in BizTalk.
The delivery notification took care of confirming that the complete batch was written to file, which enabled removing the delay while writing the batches, thus making the aggregation reliable.
If you prefer to use a static port, the file adapter should have the Append to existing file property selected; in the case of a dynamic port, the FILE.CopyMode context property should be set to 0 (Append).
Instead of writing the header and footer while writing the file, we tweaked it to write the header and footer while reading the file. While reading the file in the orchestration, we can construct the final message as below.
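As a rough illustration of the header-and-trailer-on-read tweak, the final message body can be assembled by wrapping the batch stream read from disk. The helper name and buffering choice here are illustrative only:

```csharp
using System.IO;
using System.Text;

// Illustrative helper: wrap the batch body read from disk with a header and
// trailer at read time, instead of appending them while the file is written.
static Stream BuildFinalMessage(string header, Stream batchBody, string trailer)
{
    var result = new MemoryStream();   // for very big batches, prefer a disk-backed stream
    var writer = new StreamWriter(result, Encoding.UTF8);

    writer.Write(header);
    writer.Flush();
    batchBody.CopyTo(result);          // the aggregated records as they were appended
    writer.Write(trailer);
    writer.Flush();

    result.Position = 0;
    return result;
}
```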
This can also be implemented at the pipeline level to make it more generic. Once the file is written to disk, a receive location can be used to read the file and send it to the destination systems.
On the receive pipeline, define:
With this implementation it was possible to process 2800-3000 batches quickly and with low memory usage.
Transformation is one of the key elements in integration, and BizTalk facilitates it in one of the best ways.
Transformations in BizTalk can be done within orchestrations, or at the port level by specifying inbound or outbound maps.
Transformations can also be executed in pipelines to avoid persisting messages to the database when the source is huge.
For example, suppose you receive a big message, say 100 MB, containing all the extracted records, but the business process within BizTalk needs only the subset that meets some condition. One way of achieving this is to apply a filtering map at the port level and feed the business process orchestration with the smaller set. However, only one map can be executed on a port, and in scenarios where multiple maps must be executed on the source, the drawback is that the whole 100-150 MB message is persisted to the MessageBox before further transforms apply, which becomes an overhead if there are many such processes.
To overcome this, executing the map at the pipeline level, before persistence, goes a long way in improving performance.
For this we can define a pipeline component with the properties below.
MapFQN is the fully qualified name of the map to be executed.
(The same can be extended to include XSL transformations as well, say a path to an XSLT file to be executed for the transformation.)
A virtual stream takes care of offloading to disk during transformation of big messages, greatly reducing the server's memory load.
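The transform step inside such a component's Execute method can be sketched roughly as follows. This is a minimal sketch, not a complete pipeline component; the helper name is illustrative, and error handling and streaming-XSLT concerns are left out:

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Xsl;
using Microsoft.BizTalk.Streaming;   // VirtualStream
using Microsoft.XLANGs.BaseTypes;    // TransformBase

// Sketch: load the map type from MapFQN, run its XSLT over the incoming
// stream, and write the output to a VirtualStream so big messages spill to disk.
static Stream ApplyMap(string mapFQN, Stream input)
{
    // A BizTalk map class derives from TransformBase and carries its XSLT.
    var map = (TransformBase)Activator.CreateInstance(Type.GetType(mapFQN, true));

    var xslt = new XslCompiledTransform();
    xslt.Load(XmlReader.Create(new StringReader(map.XmlContent)));

    var output = new VirtualStream();   // memory up to a threshold, then disk
    xslt.Transform(XmlReader.Create(input), map.TransformArgs, output);
    output.Position = 0;
    return output;                      // assign back to BodyPart.Data in Execute
}
```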