Thursday, October 14, 2010

What's New in DNS in Windows Server 2008 R2

Domain Name System (DNS) is the naming system used in TCP/IP networks for computers and network services; it is organized into a hierarchy of domains. DNS naming locates computers and services through user-friendly names. When a user enters a DNS name in an application, DNS services can resolve the name to other information that is associated with the name, such as an IP address.

Overview of the Improvements in DNS
The DNS Server role in Windows Server 2008 R2 contains five new or enhanced features that improve the performance of the DNS Server service or give it new abilities:

Background zone loading: DNS servers that host large DNS zones that are stored in Active Directory Domain Services (AD DS) are able to respond to client queries more quickly when they restart because zone data is now loaded in the background.
IP version 6 (IPv6) support: The DNS Server service now fully supports the longer addresses of the IPv6 specification.
Support for read-only domain controllers (RODCs): The DNS Server role in Windows Server 2008 provides primary read-only zones on RODCs.
Global single names: The GlobalNames zone provides single-label name resolution for large enterprise networks that do not deploy Windows Internet Name Service (WINS). The GlobalNames zone is useful when using DNS name suffixes to provide single-label name resolution is not practical.
Global query block list: Clients of protocols such as the Web Proxy Auto-Discovery Protocol (WPAD) and the Intra-site Automatic Tunnel Addressing Protocol (ISATAP) rely on DNS name resolution to resolve well-known host names, which leaves them vulnerable to malicious users who use dynamic update to register host computers that pose as legitimate servers. The DNS Server role in Windows Server 2008 provides a global query block list that can help reduce this vulnerability; see the dnscmd sketch after this list.
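As a hedged illustration (the property names are the dnscmd server settings for this feature, and the entries shown are simply the defaults for WPAD and ISATAP), the block list can be inspected and reset like this:
    rem Check whether the global query block list is enabled and view its contents
    dnscmd /Info /EnableGlobalQueryBlockList
    dnscmd /Info /GlobalQueryBlockList
    rem Turn the block list on and reset it to the default entries
    dnscmd /Config /EnableGlobalQueryBlockList 1
    dnscmd /Config /GlobalQueryBlockList wpad isatap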

There are several new features in the Windows Server 2008 R2 DNS server that you can use to improve the overall security of your DNS infrastructure. These include:
• DNS Security Extensions (DNSSEC)
• Control over DNS devolution behavior
• DNS cache locking
• DNS Socket Pool

DNS Security Extensions (DNSSEC):

DNSSEC was designed to protect the Internet from certain attacks, such as DNS cache poisoning. It is a set of extensions to DNS, which provide:
a) Origin authentication of DNS data.
b) Data integrity.
c) Authenticated denial of existence.
DNSSEC introduces several new terms and technologies on both the client and server side. For example, DNSSEC adds four new DNS resource records:
• DNSKEY
• RRSIG
• NSEC
• DS

Windows Server 2008 R2 Implementation:

Windows Server 2008 R2 and Windows 7 are the first Microsoft operating systems to support DNSSEC. You can now sign and host DNSSEC signed zones to increase the level of security for your DNS infrastructure. The following DNSSEC related features are introduced in Windows Server 2008 R2:
• The ability to sign a zone (that is, to provide the zone a digital signature)
• The ability to host signed zones
• New support for the DNSSEC protocol
• New support for DNSKEY, RRSIG, NSEC, and DS resource records.

DNSSEC can add origin authority (confirmation and validation of the origin of the DNS information presented to the DNS client), data integrity (assurance that the data has not been changed), and authenticated denial of existence (a signed response confirming that a record does not exist) to DNS.
Windows 7/Server 2008 R2 DNS Client Improvements
In addition to the DNS server updates in Windows Server 2008 R2, there are some improvements in the Windows 7 DNS client:
• The ability to communicate awareness of DNSSEC in DNS queries (which is required if you decide to use signed zones)
• The ability to process the DNSKEY, RRSIG, NSEC, and DS resource records.
• The ability to determine whether the DNS server to which it sent a DNS query has performed validation on the client's behalf.

DNSSEC and the NRPT :

If you’re acquainted with DirectAccess, you might be interested in the fact that DNSSEC leverages the Name Resolution Policy Table (NRPT). The DNS client DNSSEC related behavior is set by the NRPT. The NRPT enables you to create a type of policy based routing for DNS queries. For example, you can configure the NRPT to send queries for contoso.com to DNS server 1, while queries for all other domains are sent to the DNS server address configured on the DNS client’s network interface card. You configure the NRPT in Group Policy. The NRPT is also used to enable DNSSEC for defined namespaces.
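As a rough sketch (assuming Windows 7 or Windows Server 2008 R2 clients; contoso.com above is just an example namespace), the NRPT rules are defined in Group Policy under Computer Configuration > Policies > Windows Settings > Name Resolution Policy, and you can inspect what a client received with netsh:
    rem Show the NRPT rules delivered to this client by Group Policy
    netsh namespace show policy
    rem Show the NRPT rules the client is actually applying right now
    netsh namespace show effectivepolicy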


Understanding how DNSSEC works :

DNSSEC works by digitally signing these records for DNS lookup using public-key cryptography. The correct DNSKEY record is authenticated via a chain of trust, starting with a set of verified public keys for the DNS root zone which is the trusted third party.
A key feature of DNSSEC is that it enables you to sign a DNS zone, which means that all the records for that zone are also signed. The DNS client can take advantage of the digital signature added to the resource records to confirm that they are valid. This is typical of what you see in other areas where you have deployed services that depend on PKI. The DNS client can validate that the response hasn't been changed by using the public/private key pair. In order to do this, the DNS client has to be configured to trust the signer of the signed zone.
The new Windows Server 2008 R2 DNSSEC support enables you to sign file-based and Active Directory integrated zones through an offline zone signing tool. I know it would have been easier to have a GUI interface for this. When configured with a trust anchor, a DNS server is able to validate DNSSEC responses received on behalf of the client. However, in order to prove that a DNS answer is correct, you need to know at least one key or DS record that is correct from sources other than the DNS. These starting points are called trust anchors.
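As a minimal sketch of that offline signing workflow (the zone name, file names, and key friendly name below are placeholders, and the parameter list is reproduced from memory of the deployment guidance, so verify it with dnscmd /OfflineSign /? before relying on it):
    rem Generate a self-signed key signing key (KSK) for the zone
    dnscmd /OfflineSign /GenKey /Alg rsasha1 /Flags KSK /Length 2048 /Zone contoso.com /SSCert /FriendlyName KSK-contoso.com
    rem Sign the zone: read the unsigned zone file and write a signed copy
    dnscmd /OfflineSign /SignZone /Input contoso.com.dns /Output signed.contoso.com.dns /Zone contoso.com /SignKey /Cert /FriendlyName KSK-contoso.com
    rem After swapping in the signed file, reload the zone
    dnscmd /ZoneReload contoso.com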
Another change in the Windows 7 and Windows Server 2008 R2 DNS client is that it acts as a security-aware stub resolver. This means that the DNS client will let the DNS server handle the security validation tasks, but it will consume the results of the security validation efforts performed by the DNS server. The DNS clients take advantage of the NRPT to determine when they should check for validation results. After the client confirms that the response is valid, it will return the results of the DNS query to the application that triggered the initial DNS query.

Using IPsec with DNSSEC:

• The connection between the DNS client and the DNS server can be protected with IPsec, using certificate-based authentication (an SSL certificate with the Server Authentication EKU on the DNS server). There are two advantages to this: first, it encrypts the DNS query traffic between the DNS client and DNS server, and second, it allows the DNS client to authenticate the identity of the DNS server, which helps ensure that the DNS server is a trusted machine and not a rogue.
• You need to exempt both TCP port 53 and UDP port 53 from your domain IPsec policy. The reason is that, otherwise, the domain IPsec policy will be used and the DNSSEC certificate-based authentication will not be performed. The end result is that the client will fail the EKU validation and end up not trusting the DNS server.

DNS Cache Locking:

Cache locking in Windows Server 2008 R2 enables you to control the ability to overwrite information contained in the DNS cache. When DNS cache locking is turned on, the DNS server will not allow cached records to be overwritten for the duration of the time to live (TTL) value. This helps protect your DNS server from cache poisoning. You can also customize the settings used for cache locking.
When a DNS server configured to perform recursion receives a DNS request, it caches the results of the DNS query before returning the information to the machine that sent the request. Like all caching solutions, the goal is to enable the DNS server to provide information from the cache with subsequent requests, so that it won’t have to take the time to repeat the query. The DNS server keeps the information in the DNS server cache for a period of time defined by the TTL on the resource record. However, it is possible for information in the cache to be overwritten if new information about that resource record is received by the DNS server. One scenario where this might happen is when an attacker attempts to poison your DNS cache. If the attacker is successful, the poisoned cache might return false information to DNS clients and send the clients to servers owned by the attacker.
Cache locking is configured as a percentage of the TTL. For example, if the cache locking value is set to 25, then the DNS server will not overwrite a cached entry until 25% of the time defined by the TTL for the resource record has passed. The default value is 100, which means that the entire TTL must pass before the cached record can be updated. The cache locking value is stored in the CacheLockingPercent registry key. If the registry key is not present, then the DNS server will use the default cache locking value of 100. The preferred method of configuring the cache locking value is through the dnscmd command line tool.
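For example, assuming you want cached records locked for 75 percent of their TTL, the value can be set with dnscmd and is kept in the standard DNS Parameters registry key; restart the DNS Server service for the change to take effect:
    rem Set cache locking to 75 percent of the TTL
    dnscmd /Config /CacheLockingPercent 75
    rem Restart the DNS Server service so the new value is picked up
    net stop dns
    net start dns
    rem The value lives under:
    rem HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters\CacheLockingPercent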

Swimming in the Windows Server 2008 R2 DNS Socket Pool :

You can't swim in a socket pool. But what you can do with the Windows Server 2008 R2 DNS socket pool is have the DNS server use source port randomization when issuing DNS queries, because source port randomization provides protection against some types of cache poisoning attacks. The original security fix that introduced it used default settings, but with Windows Server 2008 R2 you can customize the socket pool settings. With source port randomization, the DNS server randomly picks a source port from a pool of available sockets that it opens when the service starts. This helps prevent an unauthenticated remote attacker from sending specially crafted responses to DNS requests in order to poison the DNS cache and forward traffic to locations that are under the attacker's control.
The socket pool starts with a default of 2500 sockets. However, if you want to make things even tougher for attackers, you can increase it up to a value of 10,000. The more sockets you have available in the pool, the harder it’s going to be to guess which socket is going to be used, thus frustrating the cache poisoning attacker. On the other hand, you can configure the pool value to be zero. In that case, you’ll end up with a single socket value that will be used for DNS queries, something you really don’t want to do. You can even configure certain ports to be excluded from the pool.
Like the cache locking feature, you configure the socket pool by using the dnscmd tool.
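A hedged example follows (the excluded port range is arbitrary and only there to show the syntax); as with cache locking, restart the DNS Server service after changing the values:
    rem Check the current socket pool size (the default is 2500)
    dnscmd /Info /SocketPoolSize
    rem Raise the pool to the maximum of 10,000 sockets
    dnscmd /Config /SocketPoolSize 10000
    rem Exclude a port range from the pool (example range only)
    dnscmd /Config /SocketPoolExcludedPortRanges 10000-10500
    net stop dns
    net start dns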

Friday, October 8, 2010

DHCP PROCESS Summary

Why do we need to know the DHCP lease process? Imagine you are troubleshooting why a client is not getting an IP address from the DHCP server, and you forget to check whether UDP ports 67 and 68 are blocked. In any case, knowing your stuff is going to make you smarter and stronger. I recommend memorizing the simple process below, just in case you ever find yourself in one of the situations I have described.
In the little world of TCP/IP, a device cannot communicate with any other device unless it has an IP address. Now think about an XP client that does not have an IP address yet but is still able to locate the DHCP server and ask for one. Logically you would say, "Hey, wait a second, how can the client even talk to the DHCP server when it doesn't have an IP address yet?"

• At the time of the lease request, the client doesn't know its own IP address, nor does it know the IP address of the server. Here is how the client works around this and manages to talk to a DHCP server:
• The client uses 0.0.0.0 as its own address and 255.255.255.255 (broadcast) as the server's address.
• The client broadcasts a DHCP discover message from UDP source port 68 to destination port 67.
• The discover message contains the hardware MAC address and NetBIOS name of the client.
• Once the first discover message is sent, the client waits 1 second for an offer. If no DHCP server responds within that time, the client repeats its request four more times at 2-, 4-, 8-, and 16-second intervals. If the client still doesn't get a response, it reverts to Automatic Private IP Addressing (APIPA) and continues to broadcast discover messages every 5 minutes until it gets an answer. With APIPA (169.254.x.y), the Windows client automatically picks what it thinks is an unused address (see the commands that follow).
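Using standard Windows commands, you can spot an APIPA fallback and restart the lease process from the client side like this:
    rem If the IPv4 address starts with 169.254, the client has fallen back to APIPA
    ipconfig /all
    rem Give up the current lease and run the discover/offer/request/acknowledgment exchange again
    ipconfig /release
    ipconfig /renew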

The DHCP lease is a four-step process, as listed below.
• DHCP – discovery (The discover message contains the hardware MAC address and NetBIOS name of the client.)
• DHCP - Lease offer
• DHCP - lease request
• DHCP- Lease acknowledgment
To memorize the process, use the simple map below:
• DD ( DHCP Discovery)
• LO (Lease Offer)
• LR (Lease Request)
• LA (Lease Acknowledgment)

Active Directory for daily operations

Below are some useful shortcuts (console names and command-line tools) for managing Active Directory in daily operations. I use most of these tools to perform daily administrative tasks. Knowing these shortcuts is definitely a good thing. Also check the link at the bottom, "Active Directory Product Operations Guide"; I found it incredibly useful.

• dnsmgmt.msc (DNS Manager)
• domain.msc (Active Directory domains and trusts)
• schmmgmt.msc (Active Directory Schema snap-in)
• dssite.msc (Active Directory Sites and Services)
• dsa.msc (Active Directory Users and Computers)
• DCPromo (Active Directory Installation Wizard)

• Dcdiag.exe (This command-line tool analyzes the state of domain controllers in the forest or enterprise and reports any problems to assist in troubleshooting.)

• adsiedit.msc (Used for editing Active Directory to add, delete, or move objects within the directory.)

• Netdiag.exe (Helps isolate networking and connectivity problems by performing a series of tests to determine the state of the network client.)

• Netdom.exe (Used to manage computer accounts and domain trust relationships from the command line.)

• Ntdsutil.exe (Used to perform database maintenance of Active Directory, manage and control single master operations, and remove metadata left behind by domain controllers that were removed from the network without being properly uninstalled.)

• Repadmin.exe (Used to diagnose replication problems between domain controllers.)
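As a starting point, and with no environment-specific parameters assumed, the commands I run first usually look like this:
    rem Run the full set of domain controller health tests with verbose output
    dcdiag /v
    rem Summarize replication status across all domain controllers
    repadmin /replsummary
    rem Show inbound replication partners and the last replication result for this DC
    repadmin /showrepl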

Does the LSASS.EXE have enough memory, on your Domain Controller?

A key performance factor for a DC (Domain Controller) is how much of the Active Directory database can be cached in memory. The process responsible for this task is LSASS.EXE; its caching mechanism releases cache to free memory when the OS requires it. Domain controllers that are not strong enough (low on memory) will not be able to cache as much, and this will show up as a noticeable performance issue on the domain controller. Therefore, it is a good idea to make sure the DCs have enough memory installed and that other processes are not eating up the DC's memory.

The core process LSASS.EXE is also responsible for replication, authentication, Net Logon, and the KCC. If LSASS is not happy, you end up with a busy and tired DC (Domain Controller). Any process other than LSASS that is using most of the CPU resources on a domain controller must be investigated.

The similar behavior in Exchange is seen with Store.exe, if you remember.

What is LSASS.EXE?

The Lsass.exe process is responsible for management of local security authority domain authentication and Active Directory management. This process handles authentication for both the client and the server, and it also governs the Active Directory engine. The Lsass.exe process is responsible for the following components:
• Local Security Authority
• Net Logon service
• Security Accounts Manager service
• LSA Server service
• Secure Sockets Layer (SSL)
• Kerberos v5 authentication protocol
• NTLM authentication protocol

Lsass.exe usually uses 100 MB to 300 MB of memory. Lsass.exe uses the same amount of memory no matter how much RAM is installed in the computer. However, when a larger amount of RAM is installed, Lsass can use more RAM and less virtual memory.
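A quick, rough way to see how much memory Lsass.exe is using on a DC right now (for proper baselining, use Performance Monitor counters instead):
    rem Show the memory usage (working set) of the lsass.exe process
    tasklist /fi "imagename eq lsass.exe"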

Try Server Performance Advisor V1.0, a FREE utility from Microsoft. Server Performance Advisor is a server performance diagnostic tool developed to diagnose the root causes of performance problems in the Windows Server™ 2003 operating system.

The KCC (Knowledge Consistency Checker)


The KCC (Knowledge Consistency Checker) is a built-in process that creates the replication topology in the Active Directory forest. By default, the KCC runs at 15-minute intervals and dictates the replication routes from one domain controller to another. To make it simpler: if you have a domain controller in Site-B and you create a user there, the user object is added to the .DIT database on that domain controller. If there is a domain controller in Site-A and it cannot see the user object created in Site-B, that is because replication is not happening from the Site-B domain controller to the Site-A domain controller. There may be a number of different reasons why the KCC cannot, or will not, create the connection from Site-B to Site-A. The rule of thumb is to figure out what the culprit is.
Creating manual connections might save the day. An AD replication issue can even surface in Exchange: a user gets created, but the RUS does not stamp the user, and therefore the SMTP proxy address never gets generated.
Note: Microsoft does not recommend creating manual connections, since the KCC is an automated process designed to figure out the best path for replication.
Create a manual connection
To create a manual connection, go to Active Directory Sites and Services, expand the site, expand the server object, and click NTDS Settings:
  • Right-click NTDS Settings
  • Select New Active Directory connection
  • Select a domain controller from the list, click OK, and finish
Wait for the changes to replicate in the AD topology, then right-click the connection and choose Replicate Now.
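If you would rather nudge things along from the command line than wait, here is a rough sketch; DC-SiteA and dc=contoso,dc=com are placeholders for your own server name and naming context:
    rem Force the KCC on this DC to recalculate the replication topology now
    repadmin /kcc
    rem Force synchronization of a naming context with all replication partners
    repadmin /syncall DC-SiteA dc=contoso,dc=com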
The Purpose of KCC
Data integrity is maintained by tracking changes on each domain controller and updating other domain controllers in a systematic way. Active Directory replication uses a connection topology that is created automatically, which makes optimal use of beneficial network connections and frees the administrators from having to make such decisions.
What replicates with KCC?
  • Each combination of directory partitions that must be replicated
  • Domain controllers that store the same domain directory partition must have connections to each other
  • All domain controllers must be able to replicate the schema and configuration directory partitions
The routes for the following combinations of directory partitions are aggregated to arrive at the overall topology
  • Configuration and schema within a site.
  • Each domain directory partition within a site.
  • Global Catalog read-only, partial directory partitions within a site.
  • Configuration and schema between sites.
  • Each domain directory partition between sites.
  • Global Catalog read-only, partial directory partitions between sites.
Terminology with KCC
  • KCC runs every 15 minutes.
  • The domain controllers that replicate directly with each other are called replication partners.
  • These partnerships are added, removed, or modified automatically, as necessary, based on which domain controllers are available and how close they are to each other on the network.
  • The KCC creates connections that enable domain controllers to replicate with each other.
  • A connection defines a one-way, inbound route.
  • Connection objects are created automatically by the KCC; they can also be created manually.

  • Site Links
For replication to occur between two sites, a link must be established between the sites. Site links are not generated automatically and can be created in Active Directory Sites and Services. Unless a site link is in place, the KCC cannot create connections automatically between computers in the two sites, and replication between the sites cannot take place. Each site link contains the schedule that determines when replication can occur between the sites that it connects. The Active Directory Sites and Services user interface guarantees that every site is placed in at least one site link. A site link can contain more than two sites, in which case all the sites are equally well connected
  • Bridgehead Servers
To communicate across site links, the KCC automatically designates a single server, called the bridgehead server, in each site to perform site-to-site replication. Subsequent replication occurs by replication within a site. When you establish site links, you can designate the bridgehead servers that you want to receive replication between sites. By designating a specific server to receive replication between sites, rather than using any available server, you can specify the most beneficial conditions for the connection between sites. Bridgehead servers ensure that most replication occurs within sites rather than between sites.

WHAT IS THE TOMBSTONE PROCESS IN SERVER 2008


The tombstone lifetime in an Active Directory forest determines how long a deleted object (called a “tombstone”) is retained in Active Directory Domain Services (AD DS). The tombstone lifetime is determined by the value of the tombstoneLifetime attribute on the Directory Service object in the configuration directory partition.
Tombstone Process in a basic way
  • The object gets deleted.
  • AD marks it as a deleted object by setting the object's "isDeleted" attribute to TRUE.
  • At the same time, AD strips most of the attributes from the object.
  • AD renames the object.
  • AD moves the object to a special container in the object's naming context (NC) named CN=Deleted Objects.
  • The object is now called a tombstone.
  • The object is no longer visible to administrators in ADUC.
Here is the tricky part: the tombstone is still visible to the Active Directory replication process. Why is that so? Remember the multi-master replication model. In order to make sure the deletion is performed on all the DCs that host the object being deleted, Active Directory replicates the tombstone to the other DCs. Thus the tombstone is used to replicate the deletion throughout the Active Directory environment.
To view the tombstoneLifetime attribute with ADSI Edit:
  • Open Adsiedit
  • Connect to the Configuration naming context on a DC
  • Expand CN=Configuration,DC=<forest root domain>
  • Expand CN=Services
  • Expand CN=Windows NT
  • Right-click CN=Directory Service and select Properties
  • The attribute name is tombstoneLifetime
On a domain controller in a forest that was created on a domain controller running Windows Server 2003 with Service Pack 1 (SP1), the default value is 180 days.
On a domain controller in a forest that was created on a domain controller running Windows 2000 Server or Windows Server 2003, the default value is 60 days.
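If you prefer the command line to ADSI Edit, here is a hedged sketch using dsquery; replace dc=contoso,dc=com with your forest root domain. If the attribute comes back as <not set>, the forest is simply using the default value described above:
    rem Read the tombstoneLifetime attribute from the Directory Service object
    dsquery * "cn=Directory Service,cn=Windows NT,cn=Services,cn=Configuration,dc=contoso,dc=com" -scope base -attr tombstonelifetime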