TechEd 2013's Hands On Labs are better than ever!

TechEd 2013's Hands On Labs (HOL)

If you're heading to New Orleans this year for TechEd 2013, be sure to stop by and put in some time at the Hands On Labs. I can say without exception that this year's labs are the best I have seen. You can work on Exchange technologies such as building a DAG on Exchange 2013, or you can set up an entire multi-server Lync 2013 environment integrated with an existing Lync 2010 infrastructure. I only wish the labs were open 24 hours a day so that I could attend sessions AND work through every lab.

TechEd 2013 Staff Member

I was chosen again this year to work as TechEd 2013 staff, serving as a Technical Learning Guide in the Exchange, Office 365 and Lync areas of the Hands On Labs. These are coveted spots, as there are just 56 staff positions for the roughly 17,000 current MCTs. I'm sure it helped that I am both a Microsoft Certified Trainer and a Microsoft Certified Master on Exchange Server. While the number of Exchange Masters and Rangers is insanely small, there are even fewer current MCMs who are also MCTs. Having said that, the positions are filled by the best people for the job, so letters alone will not get you in.

If you’re looking to stop by and see me, I’ll be on the floor most afternoons in the Exchange and Lync sections of the HOL area.

Official TechEd 2013 Technical Learning Guide




Posted in Uncategorized | Comments closed

OWA Offline Mode in Exchange 2013

I've had a few questions recently about Outlook Web App offline mode in Exchange Server 2013. In my opinion, OWA 2013 offline mode is a fantastic addition to Exchange 2013. It is made possible by the HTML5 offline web application standard. That said, while offline mode is a great addition, it doesn't necessarily replace a full client for offline usage.

OWA Offline Mode Browser Requirements

At this time, OWA 2013 supports offline mode on:

  • Internet Explorer 10
  • Safari 5 or later on OS X
  • Google Chrome 24 or later

Outlook Web App Offline Mode Limitations

Outlook Web App offline mode has the following limitations:

  • Email from the last three days, up to 150 items total, is available offline.
  • Your calendar contains the previous month and the next year of appointments.
  • Calendar reminders exist for a limited period of time; if a computer is offline for an extended period, reminders will no longer appear.
  • Archives are online only and, as such, do not exist offline.
  • While contacts from your Exchange Server account are available offline, the auto-complete cache does not work.
  • Searching and sorting may not fully function.
  • Attachments are not available offline.

To me, the single largest end-user issue is the lack of attachments offline.  Be sure to instruct users to download attachments before hopping on that plane if they intend to review documents while traveling.
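For administrators, whether offline mode is even offered to users is controlled through Outlook Web App mailbox policies. A minimal sketch from the Exchange 2013 Management Shell, assuming a policy named "Default" (adjust to your own policy names):

```powershell
# Restrict OWA offline mode so it is only offered on computers the user
# marks as private ("Default" is a hypothetical policy name)
Set-OwaMailboxPolicy -Identity "Default" -AllowOfflineOn PrivateComputers

# Verify the setting
Get-OwaMailboxPolicy -Identity "Default" | Format-List Name, AllowOfflineOn
```

Valid values for AllowOfflineOn are AllComputers (the default), PrivateComputers and NoComputers.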

Come see me at TechEd 2013 as a hands on guide in the Exchange 2013 and Lync 2013 labs.

I will be working as a staff member at TechEd 2013 in the capacity of a hands on guide in the Exchange 2013 and Lync 2013 labs.

Posted in Exchange 2013, Office 365 | Comments closed

So what is a vTSP or V-TSP?

My email signature contains just a few of the letters in my professional alphabet soup. Obviously I promote the fact that I am a Certified Master on Exchange. And I do like to teach when I have time, so I have MCT in my signature. But the one that gathers the most attention is the V-TSP / vTSP. (You will see this written both ways.)

It takes a lot to earn your MCM, yet the number one question I get is: what is a V-TSP?

Here is the answer: a Virtual Technology Solutions Professional (V-TSP) is an employee of a Microsoft Partner who acts as an extension of the Microsoft team in a pre-sales technical support capacity. As a V-TSP, I have the ability to position, demonstrate, design and implement Microsoft solutions.

Why is it a benefit to clients that I am a V-TSP?  Honestly, Microsoft provides us significant access as members of the V-TSP program.  A few examples:

  • We're given Microsoft network credentials, an email address and VPN access, which we can use to reach internal Microsoft resources (similar to what's available to full-time employees).
  • We get guidance on technical learning plans, solution training, advanced technical training material and Pilot/PoC/ADS training material.
  • We have tighter integration with the product teams.
  • We receive additional training beyond what is available to other partners or the public.

Between the access granted to the Exchange Ranger / Master community via completing my MCM, and the benefits granted through the V-TSP program, I have access to a wealth of knowledge that I did not have even one year ago.

Special Thanks to Microsoft for creating the V-TSP program.  It really is empowering to the program members.

Posted in Uncategorized | Comments closed

5.4.6 NDR Exchange 2003 in a Hybrid Configuration with Exchange 2010 and Office 365

We are currently performing a migration from a troubled Exchange 2003 infrastructure to Microsoft Office 365. After some discussion, we determined that it would be prudent to implement Exchange 2010 in a hybrid configuration with Office 365. Overall the implementation went smoothly, and we were linked with Office 365 in no time.

I did run into one problem in the implementation. When I went to test mail flow from an on-premises account to the cloud, I got an NDR:

Cloud Email Test
A problem occurred during the delivery of this message. Please try to resend the message later. If the problem continues, contact your helpdesk.
Diagnostic information for administrators:
Generating server:
#< #5.4.6> #SMTP#

A 5.4.6 error on Exchange 2003 tells us that the categorizer detected a message loop in delivery. At first I started to dig into Exchange 2003 using tools like WinRoute to see what was wrong. I could see the send connector for our cloud tenant domain, but I just couldn't easily identify the source of the problem. Others have had issues with SMTP connectors that cause loops, but that was not the case in this organization. And then I finally realized the source of the problem – Exchange 2010 is smarter than Exchange 2003. There's some common sense, eh?

When you migrate an account from on-premises Exchange to Office 365, a mail-enabled user is left on premises, and that object has a target address (an external email address) pointing at your Office 365 tenant's mail-flow domain. When you run the Exchange 2010 SP2 Hybrid Configuration Wizard, it adds your tenant mail-flow domain as authoritative for your organization. By definition, an authoritative domain can only exist in this Exchange organization; if a recipient does not exist, an NDR should be generated.

So we have mail-enabled users whose EXTERNAL email addresses belong to a domain that can only exist in this Exchange organization. Sounds like a recipe for disaster, or at least NDRs, correct?

Well…not exactly.  Exchange 2010 is smarter than that.  When the categorizer on Exchange 2010 finds a target address on a user account, it will process that target address and it WILL send the message out of the organization as long as a send connector exists.  This is useful in split forest scenarios, consolidation scenarios and Office 365 Hybrid scenarios.

The problem in this particular case is that Exchange 2003 is not so smart. If a domain can only exist in this organization, Exchange 2003 will not use a send connector to route mail out of the organization; as I experienced, it will return a 5.4.6 NDR. The fix is simple: set the SMTP address space to Internal Relay instead of Authoritative. Internal relay domains can exist in this Exchange organization or in another Exchange organization, so Exchange 2003 will send the message via the path we prefer – through the Exchange 2010 server over the TLS-secured connection.
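That change can be applied from the Exchange 2010 Management Shell. A sketch, assuming a hypothetical tenant mail-flow domain of contoso.mail.onmicrosoft.com (check Get-AcceptedDomain for your actual identity):

```powershell
# List accepted domains and their current type (Authoritative vs. InternalRelay)
Get-AcceptedDomain | Format-Table Name, DomainName, DomainType

# Switch the tenant mail-flow domain to InternalRelay so unresolved recipients
# are relayed out through a send connector instead of being NDR'd
Set-AcceptedDomain -Identity "contoso.mail.onmicrosoft.com" -DomainType InternalRelay
```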

Posted in Exchange 2010, Office 365, Uncategorized | Comments closed

It’s official – I am a Microsoft Certified Master on Exchange Server!

Most of those who know me are aware that I spent the last three weeks in Redmond attending rotation 14 of the Microsoft Certified Master training on Exchange Server. Those three weeks were absolutely grueling: we were in class seven days a week, for three straight weeks. On the very first day we started our lecture with Brian Reid, who teaches transport. Brian knows his stuff and fills your brain until it pours out of your ears. His lecture on the first day ended at 9:33 PM. And we're not talking any standard lecture here – there's a lecture, and then there's a Brian Reid Lecture.

The knowledge is beyond anything you can find on the internet, and I will be significantly better at my job as a result of what I learned. But then the problem becomes: how do you study for a final exam and a qualification lab? I had thousands of slides, 400 pages of handwritten notes and 20 straight days of eye-strain-induced headaches. Whatever you do, it's likely that when you're done you will spend a lot of time second-guessing the answers you gave on the exam or the qual lab, and wondering if you could have done better. But all one can do is wait for the results.

Well, the waiting is over for me – as of 44 minutes ago, I am officially a Microsoft Certified Master on Exchange Server!

Posted in Uncategorized | Comments closed

Using the Admin Account Pool in Notes Migrator for Exchange 4.6 (Part 1)

I am currently working on a migration from Lotus Notes to Microsoft's Office 365. As many of you know, migrating data to Office 365 can be time consuming. Many people are not aware of why the process is slow, nor of methods for working around the throttles that limit migration speeds. We recently inherited a client where the engineer setting up the software was not aware of how to work around those throttles.

When we first inherited the project, the client had Quest's Notes Migrator for Exchange (commonly called NME or QNME) version 4.5.4 installed. On version 4.5.4, migrating a 1GB mailbox directly to the cloud took 10 hours. Yes, you read that correctly: 10 hours for one mailbox of just 1GB. It was not an internet speed issue – the client has a 100Mb/s connection to the Internet and Riverbed accelerators for Office 365 in place. The client was quite confused by the abysmal performance and wanted to know how this was possible.

To explain this, I think it is important to point out a key fact about how access to Office 365 works – throttling is in place to protect the service's usability for all users. When a user account tries to perform too many actions against the Office 365 service, that account is blocked from executing additional actions until its "allowance" has been replenished. Put that in the context of a migration and, as you can imagine, the account migrating data is frequently exhausting its allowance.
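Those allowances live in throttling policies. On-premises Exchange 2010 exposes them directly; in Office 365 the values are managed by Microsoft, but the same cmdlet illustrates the knobs involved. A sketch (parameter names from Exchange 2010's throttling engine, shown for inspection, not as a tuning recommendation):

```powershell
# Inspect the throttling policy budgets that govern how much work a single
# account may perform (EWS is the protocol migration tools like NME use)
Get-ThrottlingPolicy | Format-List Name, EWSMaxConcurrency, EWSPercentTimeInCAS, EWSPercentTimeInMailboxRPC
```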

There are several steps we can take to resolve this problem. First and foremost, use a dedicated account for migrations! If you use a multi-purpose admin account for your migrations, you will see performance drop drastically each time that admin account is used elsewhere. The reason is simple – under the throttling engine, each account can perform only so many transactions. As you use the account elsewhere, those transactions are not available for QNME to use. Even worse, QNME does not understand that the account is being used elsewhere: in testing, even using the admin account for a few seconds caused NME to slow the migration down for an extended period. The second way to work around throttling is to use an Admin Account Pool in NME. At this particular client, Quest Notes Migrator for Exchange was running version 4.5.4, and 4.5.x does not have the ability to use an admin account pool.

We promptly upgraded the software to version 4.6 which gave us the ability to use an admin account pool.  An admin account pool has the following benefits:

  • Up to 100 administrator-level accounts can be used simultaneously to migrate user data directly to Office 365.
  • These accounts are automatically spread across all of the QNME servers you have operating.
  • Only one Office 365 license is consumed to operate the pool.

In this environment, data migration speeds improved from a single 1GB mailbox taking 10 hours to the average mailbox taking 3 hours. Even better, we are able to run 24 threads (concurrent migrations) on each of our NME servers. The end result is 24 users migrated in 3 hours per server instead of 1 user in 10 hours – roughly an 80-fold (about 8,000%) improvement in throughput.

In part 2 of this post we'll review how to set up the admin account pool when your company uses federated/directory-synced users and single sign-on (SSO) with ADFS. In my experience, federated SSO user accounts are more complex to handle with tools like Quest, and some creative configuration is necessary.

Posted in Lotus Notes Migrations, Notes Migrator for Exchange, Office 365, Quest Notes Migrator for Exchange | Comments closed

Where are my Outlook 2010 log files located?

I am currently buried deep in the midst of another issue with availability lookups (free/busy) between Exchange systems. Actually, this particular issue is between Office 365 and Quest's Coexistence Manager for Lotus Notes. As soon as we get the problem resolved, I'll cover the particulars in a follow-up post. That should be a useful one, as it is a really cool configuration overall.

Back to the facts:

While working through the issue with Office 365 tech support, I needed to enable logging inside Outlook 2010. The natural next step was to send my log files to the support rep. Unfortunately, when the time came, I could not find my log files! Everything I found kept telling me to look in the user's AppData\Local\Temp folder for the Outlook log files.

Sadly, I have been through this about two dozen times before. I always end up pulling my hair out trying to figure out why my logging isn't working. The answer, of course, is that logging is working and I have simply forgotten the directory it logs to… again. As such, I wanted to help out other people who cannot find Outlook 2010's log files.

The log files are in the %temp%\Outlook Temp directory!
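A quick way to jump straight there from PowerShell, assuming the same path described above:

```powershell
# Build the path from the TEMP environment variable and open it in Explorer
$outlookLogDir = Join-Path $env:TEMP "Outlook Temp"
Invoke-Item $outlookLogDir
```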

Stay tuned for the next post on the Notes-to-Office365 availability resolution.


Posted in Exchange Availability Service, Exchange Calendaring | Comments closed

Microsoft Exchange Server 2013

Microsoft Exchange is on a release cycle that has followed a 3-4 year timeline since Exchange 5.5 was released in 1997.  It should be no surprise that Exchange 2013 will be the next revision in the Exchange product line.  What was surprising is how it was announced – Microsoft announced Exchange 2013 alongside the new announcements for Office, Lync and SharePoint.  The Exchange announcement was abnormally quiet for Microsoft. I can only assume that Microsoft is saving the momentum for the Microsoft Exchange Conference in September.

The number one technical change that grabbed my attention is the new Exchange architecture. Exchange 2013 is brought down to two roles – a Client Access Server role and a Mailbox Server role. Without diving into the nuances, the Mailbox role is effectively what an Exchange 2010 installation with all of the roles installed would be, while the Client Access Server role under Exchange 2013 acts as a stateless proxy.

If you are experienced with Exchange 2003, this seems similar to the old front-end/back-end configuration we would implement there. I agree that from a very high level it looks like Microsoft pulled Exchange 2003 out of mothballs. However, on deeper inspection you will notice that the changes are designed to resolve many of the challenges customers face today with Exchange 2010 – challenges around load balancing and cross-site failover.

You should also know that RPC access for Outlook clients is gone. Yes, you read that correctly: no more RPC. Clients must connect using HTTP/HTTPS. Outlook no longer connects to the server FQDN; instead, AutoDiscover hands Outlook a new "server name" composed of the mailbox GUID, an @ symbol, and the UPN suffix. This enables clients to move between servers more efficiently, without the issues we can experience today. As part of this change, Outlook 2003 is no longer supported.
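To see what that GUID-based connection point looks like for a given mailbox, you can assemble it from the mailbox's properties. A sketch from the Exchange 2013 Management Shell, using a hypothetical alias:

```powershell
# "jsmith" is a hypothetical alias; the result resembles
# <mailbox GUID>@<UPN suffix>, which is what Outlook uses as the server name
$mbx = Get-Mailbox -Identity "jsmith"
$upnSuffix = ($mbx.UserPrincipalName -split "@")[1]
"{0}@{1}" -f $mbx.ExchangeGuid, $upnSuffix
```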

Are these changes a good thing? Personally, I do not know yet. I have to take a deeper technical dive into the new architecture before I can render an opinion. Of course, the reality is that the changes are here, and my job is to help clients understand how to implement the product in the best possible way.

You can read up on some of the technical features of Exchange 2013 here:

Posted in Exchange 15, Exchange 2013 | Comments closed

Client RPC Latency in Exchange 2010

This morning we had an issue where clients using one Exchange server were experiencing tremendous latency in Outlook, OWA and ActiveSync. RPC latency should never exceed 250ms, and this server was running around 2000ms. After some troubleshooting, I was able to clearly identify the cause of the problem on the Exchange 2010 server: the iSCSI interface was taking about 11 times as long as the other Exchange servers to process the average request. The morning started with only a 200ms response time, which then grew quickly as the day went on. The other Exchange servers were taking 0.115 seconds (115ms) to process the average request; this server was averaging 1.305 seconds, with delays of up to 7.6 seconds! Databases do not like waiting 7.6 seconds for the disk to respond.
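If you want to watch those latencies yourself, the relevant performance counters can be sampled from PowerShell. A sketch; the counter paths below assume Exchange 2010 (store latency on the Mailbox role, client latency on the Client Access role):

```powershell
# Sample RPC latency every 5 seconds for one minute;
# sustained values above ~250ms indicate a problem
Get-Counter -Counter "\MSExchangeIS\RPC Averaged Latency",
                     "\MSExchange RpcClientAccess\RPC Averaged Latency" `
            -SampleInterval 5 -MaxSamples 12
```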

It turns out that the NetApp-provided software had overridden the Microsoft iSCSI configuration. As a result, the iSCSI traffic was being sent directly out of the main network card used for all MAPI (Outlook) traffic. That created a downward spiral, with each traffic type slowing because it was fighting the other.

Since this is a six-node DAG, we switched the users over to another node – middle of the day, executives and all, no downtime. Interestingly, the server now handling double the load, around 2,500 users, was running with a 7-20ms response time for RPC traffic.

To me it proves just how powerful Exchange is at handling IO. Once transferred to a healthy server, several thousand users with around 2.5TB of mail ran without any issues on iSCSI.

Posted in Exchange 2010 | Comments closed

Hosted Domain Controllers in Windows Azure

Cloud computing technologies like Windows Azure have been creating a buzz for quite a while. Personally, I had been reluctant to jump on the bandwagon. I do very little work on our development projects, as I spend my time on things like Exchange, Lync and Active Directory. I was aware of cloud computing's ability to provide CPU cycles for offloading development or database work, but in my opinion it was not going to make much of an impact on my world. My opinion changed while I was working at TechEd 2012: one of the hands-on labs I had the pleasure of proctoring focused on deploying domain controllers using Windows Azure.

(Or should I say “into” Azure.  I guess at some point I’ll need to ponder the grammatical implications of Windows Azure. But for now, let’s get back to the facts!)

I was amazed by Azure's ability to host virtual machines, and by how easy it was to deploy and configure the servers and networking. Even more impressive is the reception Windows Azure has received from clients. I now have two pending projects in which we are going to deploy a domain controller into Azure. The clients could not be more disparate – a sprawling enterprise and a midsized company – and they have very different problems in their networks. Yet they are both looking to resolve the same issue: network reliability. Hosting a domain controller on an external, geo-redundant platform gives both companies the same external reliability.

When you think about it, Azure is leveling the playing field. A few years ago this capability would have been out of the financial reach of small and midsized companies, and for large companies the cost would likely have been too high to justify a single-purpose implementation. Today, Azure lets us click a box to decide how big each VM will be, and every company gets access to the same technological capabilities. I expect we'll see this leveling effect spread to more areas as time goes on.

It’s not too often that a technology comes along that gets me excited.  In fact, the last technology that excited me was the advent of database availability groups in Exchange 2010, which I first experienced in 2009.  Windows Azure has me excited, and I can’t wait to see where it takes us!

Posted in Active Directory, Azure | Comments closed