# Monday, July 04, 2011

Sydney | SharePoint 2010 Bootcamp

 

“The best course I have done in years!”

“A fantastic course. Mick really has depth of knowledge and is a very engaging trainer”

 

REGISTER TODAY - 4 seats left!

Special offer: 15% discount if you book & pay before June 30th 2011.

Overview

This is a 5-day bootcamp designed for both IT Professionals and Developers, packed with fun and technical training that explores the features of SharePoint 2010 ‘out of the box’.

At course completion, students will be able to upgrade their SharePoint V3 sites/portals to SharePoint 2010, implement and extend Microsoft Office client-side solutions, and implement custom workflows developed in Visual Studio.

They’ll be equipped to care for their SharePoint farm, back it up and restore it, and set up and configure SharePoint 2010 infrastructure. Architecting the portal and sub-site layouts is streamlined using proven strategies and known best practices within the SharePoint space.

Students will create custom WebParts and SharePoint customisations with ease, as well as site-wide features, event handlers and InfoPath Forms-based solutions. They will also explore Excel Services and the Business Intelligence offerings.

Be ready to roll up your sleeves and start your adventure here!

Date: Monday 25 – Friday 29 July 2011

Instructor: Mick Badran - MVP

Location: Breeze Office, Edgecliff Court, Suite 5a, 2 New McLean Street, Edgecliff NSW 2027

Time: 8.30am – 4.30pm

Duration: 5 Days

Course Price: $3,450.00 + GST

 

Register NOW: Emmav@breeze.net

Monday, July 04, 2011 9:28:22 PM (AUS Eastern Standard Time, UTC+10:00)  #    - Trackback
Breeze | SharePoint
# Thursday, April 07, 2011

Hit this little hurdle recently while creating a WCF Data Service against Azure Table Storage. At the moment, only a handful of LINQ operators are supported by the client library when using the Table Storage Service.

Supported Query Operators

LINQ operator          | Table service support              | Additional information
From                   | Supported as defined.               |
Where                  | Supported as defined.               |
Take                   | Supported, with some restrictions.  | The value specified for the Take operator must be less than or equal to 1,000. If it is greater than 1,000, the service returns status code 400 (Bad Request). If the Take operator is not specified, a maximum of 1,000 entries will be returned.
First, FirstOrDefault  | Supported.                          |

What this means is that we cannot perform LINQ queries that group, order by, select distinct values or even return single entity properties from the query (we must always return the entire entity). In most situations the solution is to construct the LINQ query so that it first makes use of the supported operators, then calls AsEnumerable(), followed by any operations that are not supported. This splits the LINQ query into two parts: the first part (everything before the AsEnumerable) gets sent to the backend (Azure Table Storage in this case) and the remaining part executes locally (in-memory) against the results of the first. This gets over the road-block, but as you can imagine you are bringing a larger chunk of data down to the client and continuing processing there.

Some examples:

Using Distinct()

var query = myTableServiceContext.MyEntity.Where(e => e.Category == someCategory).AsEnumerable().Select(c => c.Name).Distinct();
 
Select the next 5 entities after a given date and time (using OrderBy together with Take):
 
var query = myTableServiceContext.MyEntity.Where(e => e.Category == someCategory && e.StartDate > DateTime.UtcNow).AsEnumerable().OrderBy(o => o.StartDate).Take(5);

For further details check out the online documentation.

Thursday, April 07, 2011 8:32:00 PM (AUS Eastern Standard Time, UTC+10:00)  #    - Trackback
.NET Framework | WCF | Windows Azure
# Wednesday, April 06, 2011

Just thought I might share some useful dev tools I have either found or have had recommended to me.

The first is a must if you are doing any LINQ action in your code (…and most of us are, to some degree, these days).
Check out LINQPad. I am blown away by how useful this tool has been. Think SQL Management Studio for LINQ!

[Screenshot: LINQPad]

Another great tool I have been using lately is Neudesic’s Azure Storage Explorer.

[Screenshot: Azure Storage Explorer]

Essential for generating and managing Azure table storage data during development.
Plays nicely with both developer storage and Azure storage accounts.

Wednesday, April 06, 2011 2:53:00 AM (AUS Eastern Standard Time, UTC+10:00)  #    - Trackback
.NET Framework | Windows Azure
# Tuesday, April 05, 2011

Released just last week, this CodePlex project aims to make it easier to develop WP7 apps that talk to cloud storage. Having been down that path over the last few days, I was keen to test it out.

We get some nice new project templates:

[Screenshot: new project templates]

But most importantly we get

  • A “working” version of the OData client library (System.Data.Services.Client)
  • A Windows Phone 7 Azure StorageClient library (WindowsPhoneCloud.StorageClient)

Just in time! :P

Tuesday, April 05, 2011 10:36:29 PM (AUS Eastern Standard Time, UTC+10:00)  #    - Trackback
Windows Azure | Windows Phone 7
# Tuesday, January 25, 2011

I am finding that the development storage emulator has a few “undocumented features”. A few days ago, I was happily working through the Windows Azure Training Kit and things were going well. Today I was putting together a PoC using pieces learnt from the labs. I kept hitting a problem when trying to insert an entity into the newly created table storage (running on the local Storage Emulator). I was getting the generic error message when querying the collection:

“one of the request inputs is not valid”

var match = (from c in this.context.Clients
              where c.Name == name
              select c).FirstOrDefault();

 

Some things I tried that didn’t help:

  • Restarting the Storage Emulator a few times.
  • Restarting the machine (always worth a shot!)
  • Deleting the entries in the TableContainer and TableRow tables in the development storage DB.
  • Recreating the development storage DB using DSINIT /forceCreate.
  • Running around the office naked.

After hunting around for quite some time (including running the lab code again and getting the same result), I tracked it down to the table storage schema not being created after issuing:

CloudTableClient.CreateTablesFromModel(
   typeof(MyDataServiceContext),
   storageAccount.TableEndpoint.AbsoluteUri,
   storageAccount.Credentials);

Note: This worked happily when I was working through the labs a few days ago; neither my code nor the lab code was working now. Annoying!

Looking at the underlying DB (using SQLEXPRESS on my VM), I found no schema populated:

[Screenshot: underlying table storage database]

After some frustrating searching, I came across this post that suggested an ugly workaround > azure-table-storage-what-a-pain-in-the-ass. It suggests that on the local storage emulator you need to “convince” the table service provider that you know what you are doing by inserting some dummy entities. This only appears to be needed when you have no data in your tables. So I added the following code in my data source constructor so it gets called by my service before performing any CRUD operations.

// [WORK AROUND] See http://deeperdesign.wordpress.com/2010/03/10/azure-table-storage-what-a-pain-in-the-ass/
//  Generate some inserts to populate empty table
var client = new Client("dummy", "dummy");
this.context.AddObject("Clients", client);
this.context.SaveChanges();
this.context.DeleteObject(client);
this.context.SaveChanges();
 
var post = new Post("dummy", "dummy");
this.context.AddObject("Posts", post);
this.context.SaveChanges();
this.context.DeleteObject(post);
this.context.SaveChanges();
 

Ugly but it jumps the hurdle and allows me to get back to building out the rest of the solution. Just remember to comment it back out after you verify the schema xml has been populated successfully and your CRUD operations are going through.

Tuesday, January 25, 2011 1:53:33 PM (AUS Eastern Daylight Time, UTC+11:00)  #    - Trackback
Windows Azure
# Tuesday, November 30, 2010
HTTP Error 401.1 - Unauthorized: Access is denied due to invalid credentials.

"If I had a dollar for every time I’ve seen this…”

And yet the solution appears to be different each time, or at least it does to me when it comes to issues with integrated Windows Authentication and Kerberos. Today the solution lay in forcing IIS to use NTLM authentication, as suggested by the following KB article:

http://support.microsoft.com/kb/871179

To work around this behaviour if you have multiple application pools that run under different domain user accounts, you must force IIS to use NTLM as your authentication mechanism if you want to use Integrated Windows authentication only. To do this, follow these steps on the server that is running IIS:

  1. Start a command prompt.
  2. Locate and then change to the directory that contains the Adsutil.vbs file. By default, this directory is C:\Inetpub\Adminscripts.
  3. Type the following command, and then press ENTER:

    cscript adsutil.vbs set w3svc/NTAuthenticationProviders "NTLM"

  4. To verify that the NtAuthenticationProviders metabase property is set to NTLM, type the following command, and then press ENTER:

    cscript adsutil.vbs get w3svc/NTAuthenticationProviders


    The following text should be returned:

    NTAuthenticationProviders       : (STRING) "NTLM"
Tuesday, November 30, 2010 8:49:29 PM (AUS Eastern Daylight Time, UTC+11:00)  #    - Trackback
BizTalk General
# Monday, October 18, 2010

On a recent project I needed to resolve the identity of clients calling an orchestration exposed as a WCF service. Clients would use an X.509 certificate to sign the message. Configuring the WCF service was easy enough, but I was not getting the party resolution piece working correctly. The WCF adapter (I was using WCF-CustomIsolated) was not populating the context property (BTS.SignatureCertificate) that the party resolution component uses to look up the party, even though the client certificate was being validated. The WCF adapter was dumping the soap headers into the context. I was left either to parse the headers manually and find a way to grab details of the signing certificate, or to somehow get the WCF adapter to do this work for me (as it was already validating the client certificate and checking we had the corresponding public key in the certificate store). Fortunately, I found a way to get the adapter to help out.

The solution was to create a WCF service behavior extension to intercept message processing by the adapter (note this takes place before the message is presented to the receive pipeline). The custom behavior looks for a client certificate and, if found, writes the thumbprint into a custom soap header. The WCF adapter would then write my custom header into the message context and I could grab it in a custom pipeline component. I chose to write a component to execute before the OOTB party resolution component and populate the BTS.SignatureCertificate context property with the value of the certificate thumbprint. I could have done this all in one component and performed custom party resolution, but thought this might be a bit cleaner.

So, looking at the WCF service behavior:

using System;
using System.ServiceModel;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.IdentityModel.Claims;
using System.ServiceModel.Configuration;

namespace Breeze.WCF.ClientCertificateContext
{
    public class MessageInspector : IDispatchMessageInspector, IServiceBehavior
    {
        #region IDispatchMessageInspector Members

        object IDispatchMessageInspector.AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
        {
            object correlationState = null;
            string thumbprint = "";

            try
            {
                // Gather thumbprint of signing certificate used by the client
                foreach (ClaimSet set in request.Properties.Security.ServiceSecurityContext.AuthorizationContext.ClaimSets)
                {
                    foreach (Claim claim in set.FindClaims(ClaimTypes.Thumbprint, Rights.Identity))
                    {
                        thumbprint = BitConverter.ToString((byte[])claim.Resource);
                        thumbprint = thumbprint.Replace("-", "");
                    }
                }

                // Write this away as a custom message header
                if (!String.IsNullOrEmpty(thumbprint))
                {
                    MessageHeader header = MessageHeader.CreateHeader("ClientCertificate", "http://schemas.breeze.net/BizTalk/WCF-properties", thumbprint);
                    request.Headers.Add(header);
                }

            }
            catch (Exception ex)
            {
                System.Diagnostics.EventLog.WriteEntry("WCF MessageInspector", String.Format("Exception caught: {0}", ex.ToString()));
            }

            return correlationState;
        }

        void IDispatchMessageInspector.BeforeSendReply(ref Message reply, object correlationState)
        {
        }

        #endregion

        #region IServiceBehavior Members

        public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase,
                                         System.Collections.ObjectModel.Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
        {
            return;
        }

        public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
        {
            foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
            {
                foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
                {
                    endpointDispatcher.DispatchRuntime.MessageInspectors.Add(this);
                }
            }
        }

        public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
        {
            return;
        }

        #endregion
    }

    public class MessageInspectorElement : BehaviorExtensionElement
    {
        public override Type BehaviorType
        {
            get { return typeof(MessageInspector); }
        }

        protected override object CreateBehavior()
        {
            return new MessageInspector();
        }
    }
}

 

Tip: Don't forget to implement the BehaviorExtensionElement. You’ll need this to apply the service behavior via configuration (in the receive location) rather than having to do it programmatically. You will also need to sign, GAC and register the service behavior extension element in the machine.config (or the service’s web.config in IIS).
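For reference, a minimal sketch of that registration is below. The extension name, Version and PublicKeyToken are placeholders, and the assembly name assumes the project is compiled as Breeze.WCF.ClientCertificateContext to match the namespace above:

<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- The name is arbitrary; the type must be the assembly-qualified name of the BehaviorExtensionElement -->
      <!-- Version and PublicKeyToken are placeholders for your own signed assembly -->
      <add name="clientCertificateContext"
           type="Breeze.WCF.ClientCertificateContext.MessageInspectorElement, Breeze.WCF.ClientCertificateContext, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000" />
    </behaviorExtensions>
  </extensions>
</system.serviceModel>

Once registered, the extension shows up in the list of available behaviors on the Behavior tab of the WCF-CustomIsolated receive location.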

With the WCF service behavior bits done, we need to add it to our receive location:

[Screenshot: adding the custom behavior to the WCF-CustomIsolated receive location]

If you test the solution now, you’ll get the thumbprint of the client certificate written to your custom context property (http://schemas.breeze.net/BizTalk/WCF-properties#ClientCertificate), and it will look something like this:

<ClientCertificate xmlns="http://schemas.breeze.net/BizTalk/WCF-properties">11C3E164C41ADC8DBA0EA6558784B9FAE19E398D</ClientCertificate>

 

I had thought I might be able to get away with writing this directly into the BTS.SignatureCertificate context property, but the format is clearly different. The BTS.SignatureCertificate property needs just the certificate thumbprint string, and obviously we have the xml wrapper. So we must create a simple pipeline component to sit somewhere before the party resolution component, grab the certificate thumbprint out of our custom context property, and write it into the context property the party resolver component is looking for. A minimal sketch of such a component follows the screenshot below.

[Screenshot: custom receive pipeline with the thumbprint component placed before party resolution]
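As a rough guide, here is a minimal sketch of the interesting part of such a component. The class name and constants are mine, the usual pipeline component plumbing (IBaseComponent, IComponentUI, IPersistPropertyBag and the component category attributes) is omitted for brevity, and it assumes the custom header arrives in the context wrapped in the XML element shown above:

using System;
using System.Xml;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

namespace Breeze.BizTalk.PipelineComponents
{
    // Sketch only: copies the client certificate thumbprint from the custom WCF header
    // context property into BTS.SignatureCertificate so the OOTB party resolution
    // component can pick it up. The remaining pipeline component interfaces are omitted.
    public class ClientCertificateResolver : IComponent
    {
        private const string CustomPropNamespace = "http://schemas.breeze.net/BizTalk/WCF-properties";
        private const string SystemPropNamespace = "http://schemas.microsoft.com/BizTalk/2003/system-properties";

        public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
        {
            // The WCF adapter writes the custom SOAP header into the context,
            // still wrapped in its XML element (see the sample above)
            object rawHeader = pInMsg.Context.Read("ClientCertificate", CustomPropNamespace);
            if (rawHeader != null)
            {
                var doc = new XmlDocument();
                doc.LoadXml(rawHeader.ToString());
                string thumbprint = doc.DocumentElement.InnerText;

                // Hand the bare thumbprint to the party resolution component
                pInMsg.Context.Write("SignatureCertificate", SystemPropNamespace, thumbprint);
            }

            return pInMsg;
        }
    }
}

A production component would also handle a missing or malformed header gracefully, but this shows the gist.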

After deploying and setting the receive pipeline to use the custom one above, I got party resolution working like a bought one, with the BTS.SignatureCertificate, BTS.SourcePartyID and MessageTracking.PartyName context properties populated.

I guess I was a little surprised that all this was needed. WCF does a great job of abstracting out all the transport and security bits and moving them to configuration time (no additional code in our service or client). In the HTTP and SOAP adapter days, the MIME/SMIME pipeline component was used to decrypt and validate the signing certificate as well as populate the required context properties. Why doesn’t the WCF adapter perform this part in the same way? I mean, it’s doing the decoding, decrypting and certificate validation, so why not the populating of these context properties? Perhaps there is a secret squirrel checkbox somewhere I missed. I’d love to hear comments if anyone has done this differently.

[Updated: 19-10-2010]

Thanks to Thiago (see comments section) we have been able to simplify this further. The WCF adapter provides some “special” namespaces that allow us to instruct the adapter to write context properties in a more controlled way. Specifically, we can instruct the adapter to write directly into defined property schema elements (e.g. OOTB BizTalk property schemas or deployed custom property schemas). This allows us to write the certificate thumbprint directly into the BTS.SignatureCertificate context property and avoid the need for the custom pipeline component to move the value from the custom header property into BTS.SignatureCertificate as described above.

To do this we simply change IDispatchMessageInspector.AfterReceiveRequest to make use of these special namespaces:

                // Write this away as a custom message header
                if (!String.IsNullOrEmpty(thumbprint))
                {
                    // Write the thumbprint directly to the BTS.SigningCertificate context property
                    //  Thanks to Thiago http://connectedthoughts.wordpress.com
 
                    // Create a collection of context properties we want the adapter to write/promote for us                    
                    XmlQualifiedName clientCertificateProp = 
                        new XmlQualifiedName("SignatureCertificate", "http://schemas.microsoft.com/BizTalk/2003/system-properties"); //Maps to BTS.SignatureCertificate
                    List<KeyValuePair<XmlQualifiedName, object>> promoteProps = new List<KeyValuePair<XmlQualifiedName, object>>();
                    promoteProps.Add(new KeyValuePair<XmlQualifiedName, object>(clientCertificateProp, thumbprint));
 
                    // Add the property collection to the request
                    //  Use the http://..../Promote to have the adapter promote the context prop
                    //  or use  http:/...../WriteToContext to just have the property written but not promoted.
                    request.Properties.Add("http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties/Promote", promoteProps); 
                }

 

Now we can do away with the custom pipeline component bits and use the OOTB XMLReceive pipeline (as it contains the party resolver component already). The certificate thumbprint will be written directly into the BTS.SignatureCertificate context property (and promoted), ready for the party resolver component to use.

Nice work, Thiago!

Monday, October 18, 2010 11:50:22 AM (AUS Eastern Standard Time, UTC+10:00)  #    - Trackback
BizTalk General | WCF
# Friday, July 30, 2010

[screenshot]

Hold the phone! Didn't I just pull that folder up in Explorer?

[screenshot]

Ah… good old File System Redirector. Turns out that on x64 machines this folder is located here:

C:\Windows\SysNative\AppFabric

We can browse to this location when adding references to the Windows Server AppFabric assemblies to our VS 2010 projects.

[Screenshot: browsing to C:\Windows\SysNative\AppFabric when adding a reference]
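For what it's worth, the resulting reference in the .csproj ends up looking something like the sketch below. The assembly name is only an example (one of the AppFabric caching assemblies); substitute whichever AppFabric assembly you actually need:

<Reference Include="Microsoft.ApplicationServer.Caching.Client">
  <!-- SysNative is how a 32-bit process (such as VS 2010) sees the real System32 on x64 -->
  <HintPath>C:\Windows\SysNative\AppFabric\Microsoft.ApplicationServer.Caching.Client.dll</HintPath>
</Reference>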

All is well again in the universe

Friday, July 30, 2010 4:41:06 PM (AUS Eastern Standard Time, UTC+10:00)  #    - Trackback
VS 2010 | Windows Server AppFabric
# Friday, July 09, 2010

[image]

Of course the official site can be found here > http://australia.msteched.com/

Friday, July 09, 2010 3:42:52 PM (AUS Eastern Standard Time, UTC+10:00)  #    - Trackback
Humour

I have been avoiding this for some time now: adding new activity items to the current BAM deployment in production. Production has been running for months now, and in this high-volume system we partition the BAM activities every week and archive each month (giving the client a rolling month's worth of activity data). I was concerned that during the update of the BAM definition this data was going to be blown away (an experience that has caused much embarrassment in the past).

So the procedure I used this time did the trick… well, almost:

  • Took a “backup” of the current BAM definition using BM.exe

    bm.exe get-config -FileName:MyConfig.xml

  • Added the new activity items using Excel and edited the views
  • Exported the new BAM definition to xml
  • Removed the existing views using BM.exe

    bm.exe remove-view -Name:MyView

  • Deployed the new definition using BM.exe and the update-all command – FAILED :(

    bm.exe update-all -DefinitionFile:MyNewDef.xml
     
    The error message in the command window was:
    All queries combined using a UNION, INTERSECT or EXCEPT operator must have an 
    equal number of expressions in their target lists.
     
    Upon investigation, I found that the partition tables did not get updated with the new activity items. As the view spans both the current activity tables and all the partition tables, the view creation failed. Interestingly, the BAM Archive tables did get updated.

  • “Upgraded” the partition tables using the script from this blog post

    I did need to make a slight change to avoid some errors that cropped up with partition tables that had already been archived and as such were no longer present in the BAMPrimaryImport database (although the original script works).

    I changed the CURSOR definition to filter out those tables already archived:

    DECLARE partition_cursor CURSOR LOCAL FOR
    SELECT InstancesTable
    FROM [dbo].[bam_Metadata_Partitions]
    WHERE ActivityName = @activityName
    AND ArchivedTime Is Null -- Added additional filter
    ORDER BY CreationTime ASC
  • Deployed the new definition again using BM.exe and the update-all command – SUCCEEDED
  • Re-applied security to the Views using BM.exe

    bm.exe add-account -AccountName:TheStig -View:MyView

Unfortunately, all my BAM Alerts got blown away. Makes sense, as the alerts reference the view that was removed. Luckily, taking the backup in step one allowed me to pull out the original alert definitions and paste them into my new definition file. I re-deployed that using the update-all command and the alerts are back to normal.

I did come across KB article 969558 for BTS 2006 R2 that appeared to address the partition tables issue. It looks as though this fix did not make it into BTS 2009.

Friday, July 09, 2010 3:12:01 PM (AUS Eastern Standard Time, UTC+10:00)  #    - Trackback
BizTalk General