Thursday, March 31, 2011

Announcing the Re-release of Exchange 2007 Service Pack 3 Update Rollup 3 (V2)

The Microsoft Exchange Servicing team has fixed the reported issue and is announcing the re-release of Exchange 2007 Service Pack 3 Update Rollup 3 (V2).

Team Microsoft Rock...


Free eBook for Understanding MS Virtualization Solution

Top 10 Virtualization Best Practices

Virtualization has gone from being a test lab technology to a mainstream component in datacenters and virtual desktop infrastructures. Along the way, virtualization has occasionally received a “get out of jail free” card, and has not had the same degree of efficient IT practices applied to virtual deployments as would be expected of actual physical machines. This is a mistake.
If you had an unlimited budget, would you let everyone in your organization order a new system or two and hook it up to the network? Probably not. When virtualization first appeared on the scene, unlimited and unmanaged proliferation was kept in check by the fact that there was actually a cost associated with hypervisor applications. This provided some line of defense against rogue virtual machines in your infrastructure. That is no longer the case.
There are several free hypervisor technologies available, for both Type 1 and Type 2 hypervisors. Anyone in your organization with Windows installation media and a little free time can put up a new system on your network. When virtual machines are deployed without the right team members knowing about it, a new system can become an unmanaged, unpatched target for zero-day vulnerabilities, ready to take down business-critical systems elsewhere on your network.
Virtual systems should never be underappreciated or taken for granted. Virtual infrastructures need to have the same best practices applied as actual physical systems. Here, we will discuss 10 key best practices that should always be on your mind when working with virtual systems.
1. Understand both the advantages and disadvantages of virtualization
Unfortunately, virtualization has become a solution for everything that ails you. To rebuild systems more rapidly, virtualize them. To make old servers new again, virtualize them. Certainly, there are many roles virtualization can and should play. However, before you migrate all your old physical systems to virtual systems, or deploy a new fleet of virtualized servers for a specific workload, you should be sure to understand the limitations and realities of virtualization in terms of CPU utilization, memory and disk.
For example, how many virtualized guests can you have on a given host, and how many CPUs or cores, and how much RAM and disk space, is each consuming? Have you taken the storage requirements into account—keeping system, data and log storage separate as you would for a physical SQL server? You also need to take backup, recovery and failover into account. The reality is that failover technologies for virtual systems are in many ways just as powerful and flexible as failover for physical systems, perhaps even more so. It truly depends on the host hardware, storage and—most of all—the hypervisor technology being used.
2. Understand the different performance bottlenecks of different system roles
You have to take into account the role each virtual system plays when deploying them, just as with physical servers. When building out servers to be SQL, Exchange or IIS servers, you wouldn’t use the exact same configuration for each one. The CPU, disk and storage requirements are extremely different. When scoping out configurations for virtual systems, you need to take the same design approach as with your physical system deployments. With virtual guests, this means taking time to understand your server and storage options, not over-burdening a host with too many guests, and not setting up conflicting workloads where the CPU and disk may be at odds.
3. You can’t over-prioritize the management, patching and security of virtual systems
Two new virus outbreaks have hit in just this past week alone. The reality is that far too many virtual systems are not patched, patched late, not properly managed or ignored from a security policy perspective. Recent studies point to the significant blame that USB flash drives have to bear for the spread of viruses—especially targeted threats. The reality is that too many physical systems are un-patched and unsecure. Virtual systems—especially rogue systems—pose an even larger threat. The ability to undo system changes adds to the problem, given it makes removal of patches and security signatures far too easy—even if unintentional. Limit the proliferation of virtual machines, and make sure to include all virtual machines in your patching, management and security policy infrastructures.
4. Don’t treat virtual systems any differently than physical systems unless absolutely necessary
The last point should have begun the thought process, but it bears repeating. You shouldn’t treat virtual systems any differently than physical ones. In fact, when it comes to rogue systems, you may well want to treat them as hostile. They can become the bridge that malware uses to infiltrate your network.
5. Backup early, backup often
Virtual systems, as with physical systems, should be included in your backup regimen. You can back up the entire virtual machine or the data it contains. The latter approach may be far more valuable and far more flexible. Backing up an entire virtual machine takes considerable time and gives you few options for rapid recovery. Just as you protect your mission-critical physical systems, make sure you have the capability to recover rapidly and reliably as well. It’s all too often that systems are backed up, but not verified, which results in no backup at all.
6. Be careful when using any “undo” technology
Virtual technologies often include “undo” technology. Use this very carefully. This is another reason to be certain all virtual systems are included in your IT governance work. It’s far too easy to have a disk revert back a day or a week. This could re-expose any vulnerability you just rushed out to patch, and become the gateway to infecting the rest of your network.
7. Understand your failover and your scale-up strategy
Virtualization is often touted as the vehicle to achieve perfect failover and perfect scale-up. This depends entirely on your host hardware, hypervisor, network and storage. You should work with all your vendors to understand how well each role you’ve virtualized can scale per server guest. You also need to know how well it can failover; specifically, how long guests may be unavailable during a failover, and what their responsiveness and availability may be during the switch.
8. Control virtual machine proliferation
This is a critical aspect, yet one of the hardest to enforce. There are several hypervisors that are completely free, and even with a commercial hypervisor, it’s far too easy to “clone” a guest. This can result in a multitude of problems:
·         Security: New or errantly cloned systems can end up improperly secured, or can conflict with the system from which they were “cloned.”
·         Management: Conflicts from cloning can lead to systems that are not managed according to policy and are not patched, resulting in conflicts or instability.
·         Legal: Until recently, Windows couldn’t always determine that it was being virtualized or, more importantly, that it had been silently duplicated as a new guest (once or many times). All too often, guests have proliferated simply because duplication is easy, encouraging a laissez-faire attitude toward piracy. This is a dangerous attitude, and one your IT organization should block through policy at a minimum.
It’s too easy to clone systems. Make sure your IT organization knows the risks of undue guest duplication. Only deploy new virtual machines in compliance with the same policies you would for physical systems.
9. Centralize your storage
A leading cause of virtual machine proliferation is hosts that are physically spread throughout your organization. If you saw an employee walk up to a physical server with an external hard disk and a CD, you might wonder what was going on. With virtual systems, copying the entire guest (or two) off is entirely too easy. This ease of duplication is a key reason for virtual machine proliferation. This can also result in data loss. If you can’t physically secure your virtual machines, they should have their virtual or physical disks encrypted to ensure no loss of confidential data. By placing your virtual machine hosts and storage in central, secure locations, you can minimize both proliferation and the potential for data loss.
10. Understand your security perimeter
Whether you’re developing software or managing systems, security should be a part of your daily strategy. As you consider how to manage and patch your physical systems, always include virtual systems as well. If you’re deploying password policies, are they being enforced on your virtual systems as well? The risk is there—make sure you’re prepared to answer how virtual systems will be governed, so the risk of them being cloned can be mitigated. You need to treat virtual machines as hostile, unless they’re a part of your IT governance plan. Many hypervisors now include either a free version or trial version of antivirus software, due to the potential for security threats to cross between host and guests.
Here Now and Here for the Future
Virtualization promises to become an even more significant IT component in the future. The best thing you can do is to find a way to work with it and manage it today, rather than ignoring it and hoping it will manage itself. You need to enforce the same policies for your VMs that you enforce for your physical systems. Know where virtualization is used in your organization, and highlight to your team the risks of treating virtual machines any differently from physical systems.

Wednesday, March 30, 2011

Potential for database corruption as a result of installing Exchange 2007 SP3 RU3

Over the weekend, the Exchange Product Group was made aware of an issue which may lead to database corruption if you are running Exchange 2007 Service Pack 3 with Update Rollup 3 (Exchange 2007 SP3 RU3). Specifically, the issue was introduced in Exchange 2007 SP3 RU3 by a change in how the database is grown during transaction log replay when new data is written to the database file and there are no available free pages to be consumed.
This issue is of specific concern in two scenarios: 1) when transaction log replay is performed by the Replication Service as part of ensuring the passive database copy is up-to-date and/or 2) when a database is not cleanly shut down and recovery occurs.
While only a small number of customers have been affected to date, we believe the risk is significant enough that we recommend all customers uninstall Exchange 2007 SP3 RU3 from all Mailbox and Transport servers. Uninstalling the rollup will revert the system to the previously installed version. We have also removed the Exchange 2007 SP3 RU3 download from the Microsoft Download Center and from Microsoft Update until we are able to produce a new version of the rollup.
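If you are not sure whether RU3 is present on a given server, one simple way to check the exact build is to read the file version of ExSetup.exe from PowerShell. The path below assumes the default Exchange 2007 install location; adjust it if Exchange is installed elsewhere.

# Hedged check; compare the build returned against the rollup's KB article
(Get-Item "C:\Program Files\Microsoft\Exchange Server\Bin\ExSetup.exe").VersionInfo.ProductVersion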

For more information, refer to the blog post.


Friday, March 25, 2011

VMware pitches alternative to Microsoft Exchange 2010 DAGs


VMware HA monitors the virtual machine (VM) and can trigger a restart on another node in an ESX cluster should there be a failure. DAGs, in the meantime, work at the application level and allow for Exchange database replication, to protect application data in the event of a failure and to trigger failover should a failure occur at the app level rather than in the VM.

Thursday, March 17, 2011

How to Import PST Files into Mailboxes with Exchange 2010 SP1

Exchange Server 2010 Service Pack 1 introduced a new method for importing PST files into mailboxes called Mailbox Import Requests.  This new method replaces the previous Import-Mailbox command.
Before we look at how to create a new mailbox import request in Exchange 2010 SP1, there are a few things that you should understand.
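As a rough sketch of what this looks like in the Exchange Management Shell (the account name, mailbox and share path below are placeholders, and the full prerequisites are covered at the link below):

# Grant the import/export role to an admin account; nobody holds it by default
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User "Administrator"

# Import a PST from a UNC network path into a mailbox
New-MailboxImportRequest -Mailbox "Alan.Reid" -FilePath "\\fileserver\PST\alan.reid.pst"

# Check on progress
Get-MailboxImportRequest | Get-MailboxImportRequestStatistics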

http://exchangeserverpro.com/export-mailboxes-exchange-server-2010-sp1

Tuesday, March 15, 2011

How to Export Mailboxes with Exchange Server 2010 SP1

Exchange Server 2010 Service Pack 1 introduced a new method for exporting mailboxes called Mailbox Export Requests.  This new method replaces the previous Export-Mailbox command.
Before we look at how to create a new mailbox export request in Exchange 2010 SP1 there are a few things that you should understand.
Firstly, no accounts are granted the rights to export mailboxes by default.  You need to explicitly grant these rights, even to accounts that are organization administrators.
Secondly, the mailbox export request is processed by the Client Access server role.  Because multiple Client Access servers can exist in a site, the request could be processed by any one of them.  To ensure that the path to the export PST file is valid for any Client Access server, it has to be a UNC path to a network share, not a local path.
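Putting those two points together, a minimal sketch in the Exchange Management Shell might look like the following (the account, mailbox and share names are placeholders):

# Grant the export right explicitly; even organization administrators lack it by default
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User "Administrator"

# Export to a UNC path so whichever Client Access server picks up the request can reach it
New-MailboxExportRequest -Mailbox "Alan.Reid" -FilePath "\\fileserver\PSTExports\alan.reid.pst"

# Monitor the request
Get-MailboxExportRequest | Get-MailboxExportRequestStatistics

In practice the Exchange Management Shell session usually needs to be reopened before a new role assignment takes effect.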

http://exchangeserverpro.com/export-mailboxes-exchange-server-2010-sp1

Exchange 2010 SP1 Rollup 3 and BlackBerrys sending duplicate messages: We have received notification of an issue

We have received notification of an issue impacting some customers which have RIM BlackBerry devices connecting to an Exchange 2010 SP1 RU3 environment. At this stage we are actively working with RIM to identify the exact scenarios in which customers are reporting this issue in order to narrow down the root cause of the problem and identify a suitable resolution for it.

http://blogs.technet.com/b/exchange/archive/2011/03/14/exchange-2010-sp1-rollup-3-and-blackberrys-sending-duplicate-messages.aspx

Monday, March 14, 2011

New Exchange 2010 SP1 rollup key to DAG and OWA installs

Earlier today the Exchange CXP team released the following Update Rollups for Exchange Server 2010 and 2007 to the Download Center. Release via Microsoft Update will occur on March 22nd 2011.

http://msexchangeteam.com/archive/2011/03/08/458566.aspx

Friday, March 11, 2011

Back Pressure Feature in Exchange Transport Servers

Introduction

Exchange 2007/2010 comes with a feature for monitoring the resources on the transport servers known as Back Pressure. The feature runs only on Hub Transport and Edge Transport servers, and the Microsoft Exchange Transport service is responsible for running it.

Features
The following resources are closely monitored by the Back Pressure feature.
1. The available disk space on the drive that has the transport database (Mail.que).
2. The available disk space on the drive that has the transport database log files.
3. Memory used by all processes.
4. Memory used by the EdgeTransport.exe process.
5. The number of uncommitted transport database transactions held in memory, known as version buckets.
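When one of these resources crosses a threshold, the transport server records it in the Application event log. As a hedged sketch, the 15004-15007 event ID range from the MSExchangeTransport source is the one generally associated with resource pressure, so a quick check from PowerShell on the Hub or Edge server could look like this:

# List recent resource pressure events logged by the transport service
Get-EventLog -LogName Application -Source MSExchangeTransport |
    Where-Object { $_.EventID -ge 15004 -and $_.EventID -le 15007 } |
    Select-Object TimeGenerated, EventID, Message -First 10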

Configuring Back Pressure
There are three levels for the status of these counters – Normal, Medium and High. Each of the levels has pre-defined threshold values for each counter that is being monitored.
Typical symptoms of the Back Pressure feature kicking in are emails stuck in the Drafts folder or a “4.3.1 Insufficient System Resources” NDR from Exchange.
For example, the available disk space on the drive that holds the transport database should be at least 500 MB. If it drops below that level, the transport server stops sending and receiving email. (The threshold was 4 GB for pre-SP1 servers, which was a bit of overkill.) Similarly, every resource that is monitored has pre-defined threshold limits.
Can we change any of these settings? Of course you can! All settings related to the Back Pressure feature are stored in the EdgeTransport.exe.config file, which is located in the bin directory (by default, C:\Program Files\Microsoft\Exchange Server\Bin). Open the config file in Notepad, have a look at the settings and change any if needed. The Microsoft Exchange Transport service will have to be restarted for the changes to take effect.
Can we disable the Back Pressure feature altogether? The answer is yes! Edit the config file so that the EnableResourceMonitoring entry is set to “false”, save the file and restart the transport service. Job done! Disabling the feature is not recommended, though, because it is useful to know when resources are running out. If you need some time to sort out a resource issue (such as increasing the available disk space) while keeping the server operational, disable the feature, fix the resource issue and then enable it again.
Can we change the resource monitoring interval? The default interval is 2 seconds, and you can set it to anything between 1 and 30 seconds. Edit the config file with the value of your choice for the ResourceMonitoringInterval entry, save the config and restart the transport service.
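As a minimal sketch of the disable switch described above (it assumes the default install path, that the EnableResourceMonitoring key already exists in the appSettings section, and that the shell is running elevated):

# Load and edit EdgeTransport.exe.config, then restart the transport service
$configPath = "C:\Program Files\Microsoft\Exchange Server\Bin\EdgeTransport.exe.config"
[xml]$config = Get-Content $configPath
$enable = $config.configuration.appSettings.add | Where-Object { $_.key -eq "EnableResourceMonitoring" }
if ($enable) { $enable.value = "false" }   # temporary measure only; set back to "true" after fixing the resource issue
$config.Save($configPath)
Restart-Service MSExchangeTransport        # the config file is only read at service startup

# The ResourceMonitoringInterval entry can be adjusted the same way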
An important point to note is that the back pressure feature is only available for hub and edge transport servers. Check this article for more information regarding changing the back pressure settings.

How does Back Pressure work


The Back Pressure feature uses five stages to control the monitored resources, which means the services will not be stopped at the first sign of trouble. For each stage, certain actions are performed when Back Pressure identifies a resource bottleneck. If there is no improvement at the current level and resource utilization keeps increasing, the stage level increases until system resources come back to normal values. By default, the levels are checked at the interval defined in the ResourceMonitoringInterval parameter of the EdgeTransport.exe.config file.
Below is a brief description of each stage. Remember that each stage checks its information against the parameters in the configuration file.
Stage 1: The memory utilized by the EdgeTransport.exe process is validated. If it is higher than the value specified in the configuration file, a process called garbage collection starts; it checks for unused objects that exist in memory and removes them.
Stage 2: The number of uncommitted message queue database transactions that exist in memory is validated. If it is higher than the configured value, an attempt is made to force those in-memory transactions to be written to the transaction log files.
Stage 3: The utilization levels of all monitored resources are checked against the configuration file for normal levels of utilization. If over-utilization persists, the resource with the highest level of utilization is acted upon. The actions differ between Edge Transport and Hub Transport servers, as shown in the following tables:
Hub Transport:
Resource utilization level | Connections from other Hub Transport servers | Connections from other messaging servers | Store driver connections from Mailbox servers | Pickup directory and Replay directory submission | Internal mail flow
Medium | Allowed | Rejected | Allowed | Rejected | Functional
High | Rejected | Rejected | Rejected | Rejected | Not functional

Edge Transport:
Resource utilization level | Connections from Hub Transport servers | Connections from other messaging servers | Pickup directory and Replay directory submission
Medium | Rejected | Rejected | Rejected
High | Rejected | Rejected | Rejected

Stage 4: The memory utilization of the Exchange Transport process is validated against the configuration file. Even if we restart the Microsoft Exchange Transport service, the messages located in the Submission queue will not be processed automatically when the service starts. Another validation occurs on the message queue database transactions kept in memory; if higher than the normal level, the following actions occur: the transport dumpster is disabled, and message delivery to any remote destination that uses a remote delivery queue is disabled.
Stage 5: If the memory utilization of the Exchange Transport process is still at a high or medium level, or the memory utilization of all processes exceeds the value defined in the configuration file, the following actions occur: the DNS cache is flushed from memory, and the message dehydration process occurs.

References: http://technet.microsoft.com/en-us/library/bb201658.aspx