Friday, March 04, 2005

Analysis Services Problems Part II

In my last posting I talked about the trouble you can get yourself into trying to run PTS in a poor network environment. We discussed how to get XMLA as far as the desktop, but did not discuss how to get client tools to consume this data with the minimum of hassle.

This was one of the biggest challenges to overcome. I had thought about writing an OLEDB provider for XMLA which the client software could use while being fooled into thinking it was talking to PTS… As luck would have it, at the same time I was introduced to a company called Digital Aspects (http://www.digitalaspects.com), who had built the first version of something similar, and we were keen to test it.

I could clearly see that the potential for a great tool was there. Although it was a little rough around the edges, when used alongside compressed XMLA documents it showed that significant performance gains would be possible if we could get the provider to cater for the many different types of queries that our users run.

It's fair to say that we embarked on a fairly long journey at this point, in partnership with Digital Aspects, to come up with something that really worked for us. After probably six months of versions shipping backwards and forwards, I like to think that the provider as it stands now caters for all the weird and wonderful queries that our users like to chuck at it. Of course, getting the provider to perform was not the whole story: a lot of changes also had to be made to our client application, Intelligent Apps, to work effectively with the new provider. However, the difference is staggering!

So let’s talk about performance and the improvements that we can quantify.

Connecting to our main cube used to take on the order of 10-20 minutes in some locations, even with all the different PTS connection properties set on the client; with the new provider, 10-20 seconds is pretty representative now.

In terms of the queries themselves, we are seeing on average a performance increase of over 500%, and in some cases reports which we could not run at all, or which would have taken all day, now take 5 minutes. All in all, a success story from the user perspective.

I hope this was an insight into one way of tackling performance in a large, diverse organisation. Next time I will talk about the Analysis Services locking model.

Analysis Services Problems

Have you ever had a problem with no obvious solution?

I inherited a problem like this in my current role. To quickly paint the picture: we have a large Analysis Services deployment which suffers badly from poor network performance, caused mostly by poor infrastructure and compounded by the way that MSAS uses the network.

For those of you that don't know, MSAS uses PTS. PTS is short for Pivot Table Services and is the software on the client which talks to the Analysis Services server. PTS is great when you have a pretty capable network, as it performs a load of caching and delivers a pretty good experience. The first cracks in this architecture appear when you introduce latency and bandwidth into the equation: restrict these and you find that PTS rapidly becomes unworkable below speeds of 64Kbps, while latency of over 200ms also has a huge impact on the user experience.
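To get a feel for the arithmetic, here is a rough back-of-envelope sketch in Python. The round-trip count and payload sizes are purely illustrative assumptions, not measured PTS figures, but they show how a chatty protocol's cost on this kind of link is dominated by latency rather than raw data volume:

```python
# Rough model of a chatty client protocol over a constrained WAN link.
# Round-trip counts and payload sizes below are illustrative assumptions,
# not measured PTS figures.

def transfer_time(round_trips, total_bytes, bandwidth_bps, latency_s):
    """Estimate wall-clock time: one latency hit per round trip,
    plus the raw transmission time for the payload."""
    latency_cost = round_trips * latency_s
    transmission_cost = (total_bytes * 8) / bandwidth_bps
    return latency_cost + transmission_cost

# Hypothetical chatty handshake: 300 round trips, 2 MB of metadata.
chatty = transfer_time(300, 2_000_000, 64_000, 0.2)  # 64Kbps, 200ms latency
print(f"Chatty protocol:      ~{chatty / 60:.1f} minutes")  # ~5.2 minutes

# The same metadata in a handful of batched exchanges, assuming the
# ~90% compression discussed below.
batched = transfer_time(5, 200_000, 64_000, 0.2)
print(f"Batched + compressed: ~{batched:.0f} seconds")      # ~26 seconds
```

Even with generous assumptions the chatty model lands in the multi-minute range, while batching and compressing the same traffic brings it down to under half a minute, which is the same order-of-magnitude gap described in Part II above.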

So how do we solve this issue without looking at SQL Server 2005, which has a completely different client model? Purchasing a thin client like Novaview is also out of the question for various reasons.

To solve this problem, you have to tackle a few areas ….

Network Traffic

Microsoft released their XMLA SDK 1.1 last year; this provides the ability to ship XMLA data back to the client. At the time of writing, none of the major clients can consume it natively, and I suspect they will all wait for Yukon before doing so. So why would I suggest XMLA at all? If you look at the raw output, it's actually very verbose and would, on its own, make network performance even worse.
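For the curious, here is a minimal sketch in Python of submitting an MDX query to an XMLA endpoint over HTTP. The server URL, catalog, and query are hypothetical stand-ins; the envelope follows the general shape of an XMLA 1.1 Execute request:

```python
# Minimal sketch of an XMLA Execute request over HTTP.
# The endpoint URL, catalog, and MDX statement are hypothetical examples.
import urllib.request

XMLA_URL = "http://analysis-server/xmla/msxisapi.dll"  # assumed endpoint

soap_body = """<?xml version="1.0" encoding="utf-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command>
        <Statement>
          SELECT [Measures].MEMBERS ON COLUMNS FROM [Sales]
        </Statement>
      </Command>
      <Properties>
        <PropertyList>
          <Catalog>FoodMart 2000</Catalog>
          <Format>Multidimensional</Format>
        </PropertyList>
      </Properties>
    </Execute>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""

request = urllib.request.Request(
    XMLA_URL,
    data=soap_body.encode("utf-8"),
    headers={
        "Content-Type": "text/xml",
        "SOAPAction": '"urn:schemas-microsoft-com:xml-analysis:Execute"',
    },
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # verbose XMLA result set
```

Even a modest query like this comes back as a deeply nested, heavily namespaced XML document, which is exactly why the response needs taming before it crosses a slow link.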

The first problem is how to reduce the size of the XMLA document shipped back to the client, and this is actually quite easy. IIS has an option to turn on HTTP compression, meaning that if the server and client can negotiate a compression scheme, your data is compressed before transmission. XMLA's repetitive markup compresses extremely well, giving a compression ratio of around 90%.

In reality, though, I found the implementation of compression in IIS to be less than reliable and resorted to a third-party product (http://www.pipeboost.com). This is what we use in production today.
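Whichever compression filter sits on the server, the client side of the negotiation is plain HTTP. Here is a sketch, reusing the hypothetical endpoint from above, of offering gzip on the request and decompressing the response only when the server actually agreed to it:

```python
# Sketch of the content negotiation described above: the client advertises
# gzip support and, if the server (IIS or a third-party filter) agrees,
# the verbose XMLA response arrives compressed on the wire.
# The endpoint URL is an assumption, as in the earlier sketch.
import gzip
import urllib.request

def post_xmla(envelope: str,
              url: str = "http://analysis-server/xmla/msxisapi.dll") -> bytes:
    """POST a SOAP envelope, offering gzip, and return the decoded body."""
    request = urllib.request.Request(
        url,
        data=envelope.encode("utf-8"),
        headers={
            "Content-Type": "text/xml",
            "SOAPAction": '"urn:schemas-microsoft-com:xml-analysis:Execute"',
            "Accept-Encoding": "gzip",  # advertise compression support
        },
    )
    with urllib.request.urlopen(request) as response:
        payload = response.read()
        # Only decompress if the server actually negotiated gzip.
        if response.headers.get("Content-Encoding") == "gzip":
            payload = gzip.decompress(payload)  # ~90% smaller on the wire
        return payload
```

Standard HTTP clients and proxies understand this negotiation for free; the hard part, covered in Part II above, is getting OLEDB-based client tools to consume the result.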

Great, so now we can submit a query to the XMLA SDK on the server and get back a compressed response, but that alone does not solve any real-world problems. Within our company, all of our users use one of two main tools, ProClarity or IA. The next issue is how we present this data to those existing clients.

To be continued …..