Creating and Deploying Templates link to MikeLaverick.com

MikeLaverick.com post on Creating VMware Template


DFS Replication 2012 R2

DFS Replication in Windows Server 2012 R2 :http://blogs.technet.com/b/filecab/archive/2013/08/20/dfs-replication-in-windows-server-2012-r2-if-you-only-knew-the-power-of-the-dark-shell.aspx

DFS Replication Initial Sync in Windows Server 2012 R2:http://blogs.technet.com/b/filecab/archive/2013/08/21/dfs-replication-initial-sync-in-windows-server-2012-r2-attack-of-the-clones.aspx

DFS Replication in Windows Server 2012 R2: Restoring Conflicted, Deleted and PreExisting files with Windows PowerShell: http://blogs.technet.com/b/filecab/archive/2013/08/23/dfs-replication-in-windows-server-2012-r2-restoring-conflicted-deleted-and-preexisting-files-with-windows-powershell.aspx

Understanding DFS (how it works): http://technet.microsoft.com/en-us/library/cc782417(v=WS.10).aspx

=> Several mechanisms are used: routing, DNS, AD sites and subnets topology, WINS. The required firewall ports and rules should be open (RPC, SMB…):

NetBIOS Name Service:  Domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets: TCP/UDP 137

NetBIOS Datagram Service: Domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets: UDP/138

NetBIOS Session Service: Domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets: TCP/139

LDAP Server: Domain controllers TCP/UDP 389

Remote Procedure Call (RPC) endpoint mapper: Domain controllers TCP/135

Server Message Block (SMB): Domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets: TCP/UDP 445

Extract from the MS TechNet: “When a client requests a referral from a domain controller, the DFS service on the domain controller uses the site information defined in Active Directory (through the DsAddressToSiteNames API) to determine the site of the client, based on the client’s IP address. DFS stores this information in the client site cache.”
“DFS clients store root referrals and link referrals in the referral cache (also called the PKT cache). These referrals allow clients to access the root and links within a namespace. You can view the contents of the referral cache by using Dfsutil.exe with the /pktinfo parameter.”
“You can view the domain cache on a client computer by using the Dfsutil.exe command-line tool with the /spcinfo parameter.”

Implementing DFS-R: http://technet.microsoft.com/en-us/library/cc770925.aspx AND DFS-R FAQ: http://technet.microsoft.com/en-us/library/cc773238.aspx, delegate DFS-R permissions: http://technet.microsoft.com/en-us/library/cc771465.aspx

Implementing DFS Namespace: http://technet.microsoft.com/en-us/library/cc730736.aspx AND DFS-N FAQ: http://technet.microsoft.com/fr-fr/library/ee404780(v=ws.10).aspx

Consolidation of multiple DFS namespaces into a single one: http://blogs.technet.com/b/askds/archive/2013/02/06/distributed-file-system-consolidation-of-a-standalone-namespace-to-a-domain-based-namespace.aspx

Netmon trace digest: http://blogs.technet.com/b/josebda/archive/2009/04/15/understanding-windows-server-2008-dfs-n-by-analyzing-network-traces.aspx

DFS 2008 step by step: http://technet.microsoft.com/en-us/library/cc732863(WS.10).aspx

DFS tuning and troubleshooting:

DFS-N and DFS-R from the command line: http://blogcastrepository.com/blogs/benoits/archive/2009/08/22/dfs-n-et-dfs-r-en-ligne-de-commande.aspx

DFSR: the most useful commands: http://www.monbloginfo.com/2011/03/02/dfsr-les-commandes-les-plus-utiles/

and http://blogs.technet.com/b/filecab/archive/2009/05/28/dfsrdiag-exe-replicationstate-what-s-dfsr-up-to.aspx

Tuning DFS: http://technet.microsoft.com/en-us/library/cc771083.aspx and Tuning DFS Replication performance : http://blogs.technet.com/b/askds/archive/2010/03/31/tuning-replication-performance-in-dfsr-especially-on-win2008-r2.aspx

DFSutil command line: http://technet.microsoft.com/fr-fr/library/cc776211(v=ws.10).aspx AND http://technet.microsoft.com/en-us/library/cc779494(v=ws.10).aspx

Performance tuning guidelines for Windows 2008 R2: http://msdn.microsoft.com/en-us/windows/hardware/gg463392.aspx

Monitoring:

DFSRMon utility: http://blogs.technet.com/b/domaineetsecurite/archive/2010/04/14/surveillez-en-temps-r-el-la-r-plication-dfsr-gr-ce-dfsrmon.aspx

Or use DfsrAdmin.exe in conjunction with Scheduled Tasks to regularly generate health reports: http://go.microsoft.com/fwlink/?LinkId=74010
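
As a rough sketch of what that looks like in practice (the replication group RG01, reference member SRV01, and report folder C:\DFSR-Reports are made-up example values; check dfsradmin health new /? for the exact switches on your build):

# Generate a DFSR health report on demand
dfsradmin health new /rgname:RG01 /refmemname:SRV01 /repname:C:\DFSR-Reports\Health /fscount:true

# Register the same report as a daily 06:00 scheduled task (PowerShell, Server 2012 R2)
$action  = New-ScheduledTaskAction -Execute "dfsradmin.exe" -Argument "health new /rgname:RG01 /refmemname:SRV01 /repname:C:\DFSR-Reports\Health /fscount:true"
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "DFSR Health Report" -Action $action -Trigger $trigger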

Server side:

DFS: some notions: A referral is an ordered list of targets that a client computer receives from a domain controller or namespace server when the user accesses a namespace root or folder with targets. After the client receives the referral, the client attempts to access the first target in the list. If the target is not available, the client attempts to access the next target.

tip1) dfsutil /domain displays all namespaces in the domain; e.g. dfsutil /domain:mydomain.local /view

tip2) You can check the size of an existing DFS namespace by using the following syntax in Dfsutil.exe:

dfsutil /root:\\mydomain.local\rootname /view (for domain-based DFS)
dfsutil /root:\\dfsserver\rootname /view (for stand-alone DFS)

tip3) Enabling the insite setting of a DFS server is useful when you don’t want DFS clients to connect outside their own site, i.e. to a site other than the one they are in, and hence want to avoid using expensive WAN links.
dfsutil /insite:\\mydomain.local\dfsroot /enable

tip4) You want DFS clients to be able to connect outside the internal site, but you want clients to connect to the closest site first, saving the expensive network bandwidth:

ex: dfsutil /root:\\mydomain.local\sales /sitecosting /view or /enable or /disable

If you do not know if a root is site costing aware, you can check its status by substituting the /display parameter for the /sitecosting parameter.

tip5) Enable root scalability mode: You enable root scalability mode by using the /RootScalability parameter in Dfsutil.exe, which you can install from the Support\Tools folder on the Windows Server 2003 operating system CD. When root scalability mode is enabled, DFS root servers get updates from the closest domain controller instead of the server acting as the PDC emulator master.
As a result, root scalability mode reduces network traffic to the PDC emulator master at the expense of faster updates to all root servers. (When you make changes to the namespace, the changes are still made on the PDC emulator master, but the root servers no longer poll the PDC emulator master hourly for those changes; instead, they poll the closest domain controller.)
With this mode enabled, you can have as many root targets as you need, as long as the size of the DFS Active Directory object (for each root) is less than 5 MB. Do not use root scalability mode if any of the following conditions exist in your organization:
  • Your namespace changes frequently, and users cannot tolerate having inconsistent views of the namespace.
  • Domain controller replication is slow. This increases the amount of time it takes for the PDC emulator master to replicate DFS changes to other domain controllers, which, in turn, replicate changes to the root servers. Until this replication completes, the namespace will be inconsistent on all root servers.

ex: dfsutil /root:\\mydomain.local\sales /rootscalability /view or /enable or /disable

tip6) Dfsdiag utility: http://blogs.technet.com/b/filecab/archive/2008/10/24/what-does-dfsdiag-do.aspx

/testdcs: With this you can check the configuration of the domain controllers. It performs the following tests:

  • Verifies that the DFS Namespace service is running on all the DCs and its Startup Type is set to Automatic.
  • Checks for the support of site-costed referrals for NETLOGON and SYSVOL.
  • Verifies the consistency of site association by hostname and IP address on each DC.

To run this command against your domain mydomain.local just type:

DFSDiag /testdcs /domain:mydomain.local

DFSDiag /testdcs > dfsdiag_testdcs.txt

/testsites: Used to check the configuration of Active Directory Domain Services (AD DS) sites by verifying that servers that act as namespace servers or folder (link) targets have the same site associations on all domain controllers.

So for a machine you will be running something like: DFSDiag /testsites /machine:MyDFSServer

For a folder (link): DFSDiag /testsites /dfspath:\\mydomain.local\MyNamespace\MyLink /full

For a root: DFSDiag /testsites /dfspath:\\mydomain.local\MyNamespace /recurse /full

/testdfsconfig: With this you can check the DFS namespace configuration. The tests it performs are:

  • Verifies that the DFS Namespace service is running and that its Startup Type is set to Automatic on all namespace servers.
  • Verifies that the DFS registry configuration is consistent among namespace servers.
  • Validates the following dependencies on clustered namespace servers that are running Windows 2008 (not supported for W2K3 clusters):
    • Namespace root resource dependency on network name resource.
    • Network name resource dependency on IP address resource.
    • Namespace root resource dependency on physical disk resource.

To run this you just need to type: DFSDiag /testdfsconfig /dfsroot:\\mydomain.local\MyNamespace

/testdfsintegrity: Used to check the namespace integrity. The tests performed are:

  • Checks for DFS metadata corruption or inconsistencies between domain controllers
  • In Windows 2008 server, validates that the Access Based Enumeration state is consistent between DFS metadata and the namespace server share.
  • Detects overlapping DFS folders (links), duplicate folders, and folders with overlapping folder targets (link targets).

To check the integrity of your domain mydomain.local:

DFSDiag /testdfsintegrity /dfsroot:\\mydomain.local\MyNamespace

DFSDiag.exe /testdfsintegrity /dfsroot:\\mydomain.local\MyNamespace /recurse /full > dfsdiag_testdfsintegrity.txt

Additionally you can specify /full and /recurse. In this case, /full verifies the consistency of share and NTFS ACLs in all the folder targets and verifies that the Online property is set on all the folder targets; /recurse extends the testing to the namespace interlinks.

/testreferral: Performs specific tests, depending on the type of referral being used.

  • For Trusted Domain referrals, validates that the referral list includes all trusted domains.
  • For Domain referrals, performs a DC health check as in /testdcs.
  • For SYSVOL and NETLOGON referrals, performs the validation for Domain referrals and verifies that the TTL has the default value (900 s).
  • For namespace root referrals, performs the validation for Domain referrals, a DFS configuration check (as in /testdfsconfig) and a namespace integrity check (as in /testdfsintegrity).
  • For DFS folder referrals, in addition to performing the same health checks as when you specify a namespace root, this command validates the site configuration for the folder target (DFSDiag /testsites) and validates the site association of the local host.

Again for your namespace mydomain.local:

DFSDiag /testreferral /dfspath:\\mydomain.local\MyNamespace

DFSDiag.exe /testreferral /dfspath:\\mydomain.local\MyNamespace /full > dfsdiag_testreferral.txt

There is also the option to use /full as an optional parameter, but this only applies to Domain and Root referrals. In these cases /full verifies the consistency of site association information between the registry and Active Directory.

Domain controllers:

Evaluate domain controller health, site configurations, FSMO ownerships, and connectivity:

Use Dcdiag.exe to check whether domain controllers are functional; see the dcdiag documentation for comprehensive details:

Dcdiag /v /f:Dcdiag_verbose_output.txt

Dcdiag /v /test:dns /f:DCDiag_DNS_output.txt

Dcdiag /v /test:topology /f:DCDiag_Topology_output.txt

Active Directory replication

If DCDiag finds any replication failures and you need additional details about them, Ned wrote an excellent article a while back that covers how to use the Repadmin.exe utility to validate the replication health of domain controllers:

Repadmin /replsummary * > repadmin_replsummary.txt

Repadmin /showrepl * > repadmin_showrepl.txt

Always validate the health of the environment prior to utilizing a namespace.

Clients:

  • dfsutil /root:\\mydomain.local\myroot /view /verbose    ; display the content of the DFS root (links…)
  • dfsutil /pktinfo     ;to display the client cache
  • dfsutil /spcinfo     ; the domain cache on a client computer
  • dfsutil /purgemupcache ; cache stores information about which redirector, such as DFS, SMB, or WebDAV, is required for each UNC path
  • dfsutil /pktflush   ; Dfsutil /PktFlush is a special problem repair command that should only be executed on the client.

The PKT cache keeps information about referrals for previously accessed DFS paths. If any path is accessed after flushing this cache, the appropriate server(s) will be contacted again to get new referrals.

A client benefits from high availability in DFS by getting a list of link target referrals within the same site as well as targets in farther sites. In some cases targets in the closer sites may be inaccessible at the beginning of the client’s use, causing the client to successfully fail over to a target at a farther site. Once a closer and less expensive target is available, you would like the client to use it. If you do not want to reboot the client to force a closer site to be selected, run dfsutil /pktflush. This command flushes the local partition knowledge table (PKT) and forces the client to get the referral list of the targets from the server again. Some of the entries in the PKT may not get flushed, especially if DFS is in the process of using the referrals. Once the PKT is flushed from the client cache, the client gets a new list of referrals from the server and will try accessing the closer targets.

Example:

If your support team is asking you to check a problem on a DFS root or client computer, e.g. \\mycompany.net\rootdfs

The commands I used are:

For \\mycompany.net\rootdfs (from an admin workstation):
Dfsdiag /testreferral /dfspath:\\mycompany.net\rootdfs    => OK
Dfsdiag /testdfsconfig /dfsroot:\\mycompany.net\rootdfs   => OK
Dfsdiag /testsites /dfspath:\\mycompany.net\rootdfs       => OK

Otherwise, suspect a problem on the clients (intermittent DNS, WINS, or DFS cache issue):

Check name resolution with DNS
Check name resolution with WINS

On client PC, if problem occurs, check and flush the cache:

To check:
dfsutil /root:\\mycompany.net\rootdfs /view /verbose   ; display the content of the DFS root (links…)
dfsutil /pktinfo                                       ; display the client cache
dfsutil /spcinfo                                       ; display the domain cache on a client computer
To flush:
dfsutil /purgemupcache   ; this cache stores information about which redirector, such as DFS, SMB, or WebDAV, is required for each UNC path
dfsutil /pktflush        ; flushes the local partition knowledge table (PKT)

Windows Server 2012 R2 Deduplication by Scott D. Lowe

Repost of an article on WindowsNetworking.com
Windows Server 2012 R2 Deduplication

by Scott D. Lowe [Published on 18 Sept. 2014 / Last Updated on 18 Sept. 2014]

In this article, you will learn about how to manage the deduplication feature in Windows Server 2012 R2.
Introduction

Even as the cost per GB of storage continues to drop as vendors release hard drives of massive size, customers still seek to find ways to maximize their investment in what remain expensive storage solutions. One of the most common methods by which organizations drive down the cost of their storage is by implementing deduplication in their storage environments. In this article, you will learn about how to manage this cost-saving feature in Windows Server 2012 R2.

If you pick just one feature on your Windows Server 2012 R2 server to turn on, you’ll probably want it to be the new deduplication option. Primary storage deduplication is typically left to the hardware layer and may require expensive shared storage (SAN or NAS) with that capability. Now, with Windows Server 2012 R2 you can implement this space- and money-saving feature using native controls at the file system level.

The feature was unveiled with Windows Server 2012 originally, but the R2 iteration added some extra capabilities, including the ability to extend deduplication to CSV (Cluster Shared Volumes) and VHD (Hyper-V Virtual Hard Disk) workloads, plus the addition of the Expand-DataDedupFile PowerShell cmdlet in case you need to inflate a previously deduplicated volume.
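
As a side note, a minimal sketch of that inflate operation as it ships in the Deduplication PowerShell module (the cmdlet is exposed as Expand-DedupFile and works per file; the path below is just an example):

# Re-hydrate a specific file that must not stay deduplicated (example path)
Import-Module Deduplication
Expand-DedupFile -Path "D:\VMs\Template.vhdx"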

Installing and Configuring your Windows Server 2012 R2 Deduplication

The steps to deploy are very simple as you can see here. First, we enable the feature through the Add Roles and Features wizard which you will find under File and Storage Services | File and iSCSI Services | Data Deduplication:

Figure 1
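
If you prefer to skip the wizard, the same role service can be added with one line of PowerShell (feature name as it ships in Server 2012 R2):

# Install the Data Deduplication role service without the GUI
Install-WindowsFeature -Name FS-Data-Deduplication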

Once the installation wizard completes (no reboot required), you will be able to open the File and Storage Services section in Server Manager, then right-click your volume and select Configure Data Deduplication:

Figure 2

The Deduplication Settings page has a simple checkbox to enable the feature, lets you select the minimum file age for deduplication (the default is 5 days), and lets you add folders to exclude in case you have workloads that are not compatible with deduplicated data.

Figure 3
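
The same settings can be applied from PowerShell; a hedged sketch assuming volume D: and an example exclusion folder:

# Enable deduplication on the volume
Enable-DedupVolume -Volume "D:"
# Mirror the GUI settings: minimum file age in days plus an example folder exclusion
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 5 -ExcludeFolder "D:\NoDedupWorkload"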

If you click the Set Deduplication Schedule button, you also have the option to throttle deduplication task priority during specific times, so that other tasks such as backups, or day-to-day production usage during business hours, take priority over the deduplication tasks.

Figure 4
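
The scripted equivalent is a deduplication schedule; a rough sketch of an off-hours optimization window (the name and times are made up; verify the parameters with Get-Help New-DedupSchedule):

# Nightly optimization job starting at 22:00 and running for up to 8 hours
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization -Start 22:00 -DurationHours 8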

Now that you have enabled your deduplication at the volume, you can kick off the task manually to get things started. This is easily done using the PowerShell one-liner Start-DedupJob -Type Optimization -Volume D:

Figure 5

Now our deduplication job is running in the background, and you can easily monitor the progress using the Get-DedupStatus and Get-DedupJob Cmdlets:

Figure 6
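
A quick usage sketch of those two monitoring cmdlets:

# Overall savings, in-policy file counts and last-run results for the volume
Get-DedupStatus -Volume "D:" | Format-List
# Progress of any deduplication jobs currently running
Get-DedupJob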

Depending on the size of your volume, the process may take quite a while. Once the task has been completed, the schedule will re-run the deduplication job to check your volume against the threshold of file age every day.

Performance Concerns

One of the top concerns for systems administrators is server performance during deduplication, along with application performance when using data that is on a deduplicated volume.

On the sample volume for the images above, you can see that the CPU and Memory overhead is nominal, but there will be higher than normal utilization at the disk and disk controller as the first pass runs:

Figure 7

Beyond the server performance, you will have to evaluate whether any issues are occurring where applications are accessing data that is on a deduplicated volume. In those cases, you can simply add the folder exclusion to your volume deduplication settings as we saw in the earlier screenshot during the initial setup.

After Deduplication

In the example volume that was used, there was a final savings of 740 GB after the deduplication job completed. The final status screen shows us how many files were optimized and which are considered to be “in policy”, meaning that they meet the criteria of being over the file age to be evaluated for deduplication.

Figure 8

As you can see from this example, there was a significant savings in disk usage which will definitely turn into better efficiency with our storage footprint.

Conflicts and Challenges with Deduplication

One particular issue with deduplication on Windows Server 2012 and Windows Server 2012 R2 is the use of FSRM (File Server Resource Manager) quotas. Unfortunately, hard quotas are not supported on a volume that is running data deduplication.

This is an issue because the quota is based on the actual used space on the volume, which is no longer represented correctly once data is deduplicated, so we have to rely on soft quotas only on deduplicated volumes.
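
If you still need FSRM quotas on a deduplicated volume, a hedged sketch of creating a soft quota (the path and size are examples; New-FsrmQuota comes from the FileServerResourceManager module):

# A soft quota only reports and notifies, so it tolerates the skewed used-space numbers on a deduplicated volume
New-FsrmQuota -Path "D:\Shares\Projects" -Size 500GB -SoftLimit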

Another service which cannot co-exist with data deduplication is the SIS (Single Instance Store) option which was a predecessor available on Windows Storage Server. The migration to Windows Server 2012 will take some care and feeding if you had this feature in place in the past.

One other thing to think about is your backup processes. During the deduplication process, the archive bit is set on files that are optimized, which will cause those files to be included in incremental or differential backups. In the example of my sample volume, the incremental backup for that night included the large number of files that were affected during the deduplication process. Results may vary depending on whether your backup software uses the archive bit or a file checksum to detect changed files.

Realized Savings with Deduplication

The level of space savings will vary depending on a number of factors such as file type, file age, and frequency of change. Even in cases with large binary files, there will be a space savings using this feature. Given the low impact of running the service and limited risk with other Windows services, there really is no reason why you wouldn’t want to run data deduplication at least to see what the potential win could be.

It is important to note that deduplication is not to be used to overcommit storage on a volume. This is a common issue where operational efficiency is misinterpreted as a way to save money on hardware growth. The feature is meant to reduce utilization, but you do have to be wary of letting deduplicated volumes run at extremely high usage levels: sudden usage and change on the file system could inflate the real utilized space and risk hitting the actual storage limit on the disks.

What about my Storage Hardware Deduplication?

This is a great question that comes up when we approach the idea of using OS based storage optimization. Luckily, there are no known conflicts between the Windows Server 2012 deduplication features and any of the widely available hardware level deduplication features on shared storage environments. The obvious win will be for customers who do not have the hardware based capability in their data center, and this is a particularly exciting feature for ROBO (Remote Office / Branch Office) deployments.

All in all, Microsoft has delivered a strong product with Windows Server 2012 and Server 2012 R2 deduplication, and we look forward to watching more features as they come out with future releases.

DHCP: Clustering DHCP in Windows Server 2012 R2

Microsoft Step-by-Step: Configure DHCP for Failover

Microsoft: Understand and Deploy DHCP Failover

Microsoft Blog: Pierre Roman Step-by-Step: DHCP High Availability with Windows Server 2012 R2

The following is a repost of “DHCP Failover with Microsoft Server 2012 R2” from Windowsnetworking.com by Scott D. Lowe.  This guy knows his stuff.

Introduction

One of the great features in Windows Server 2012 R2 is the DHCP failover for Microsoft DHCP scopes. For those who have experienced Microsoft DHCP management in Windows 2000/2003/2008 you will recall that one of the long requested features was a true load balancing and failover option.

Prior to Windows Server 2012, the only failover option was to have full copies of the scope definitions on a secondary server with the scope disabled. You would have to manually enable the scope in the event of a failure at the primary server, but this would be time consuming and could cause IP address conflicts as machines requested new IP information from the new DHCP server.

Alternatively, you could load balance the scopes by using the same scope and gateway information, but different portions of the scope are active on each server. This can be done by using a technique known as DHCP scope splitting.


Load Balancing versus Failover

One of the first things that can confuse many Systems Administrators is the difference between load balancing and failover. They aren’t always mutually exclusive, so it is even more important to understand the key differences.

Load balancing is the use of an active-active configuration which shares services between multiple nodes which may be spread among remote sites for safety and redundancy. In application environments, load balancing will be configured based on a balancing algorithm that may be as simple as round robin, and as complex as route cost based on latency and response for service delivery by locality.

Failover is where the environment suffers an outage of a service which triggers the failover of that service function to a secondary server or site. The assumption for most failover configurations is that the primary server is completely unavailable. With our DHCP failover, we can actually mix the two roles of failover and load balancing by operating the scope on multiple servers across your data centers. This hybrid operational model greatly reduces the risk of service loss.

What about our DHCP Configuration on Network Equipment?

As you may already know, we require configuration at our routers, and potentially Layer 3 switches, to make our client nodes aware of the DHCP servers that are servicing the subnet. This is done with the DHCP Relay Agent, sometimes known as the DHCP Helper address.

By defining the DHCP Relay, any nodes on that switch with access to the appropriate VLAN will go through the normal process of requesting an IPv4 DHCP address. This is done with the DHCPDISCOVER request which is forwarded by the router to the DHCP server by the DHCP Relay Agent. The DHCP server returns a DHCPOFFER, followed by a DHCPREQUEST, and finally a DHCPACK confirming the IP address and lease information.

This is all good, but what happens in the case where we have more than one DHCP server, and require multiple DHCP Relay Agent addresses at the router? This is a great question, and it explains why we will really appreciate the new Windows Server 2012 R2 DHCP services.

If we have multiple 2003/2008 DHCP servers, the scopes must be disabled on the second server, otherwise both DHCP servers will be replying to the DHCPDISCOVER broadcast and may hand out the same IP address to more than one workstation since they aren’t aware of each other.

Under Windows Server 2012 R2, we have the new DHCP failover model in which the primary server actively services DHCP requests while the failover instance is aware of the scope, but not active in the process.

Let’s take a look at the configuration of a failover scope before we go any further.

Configuring a DHCP Failover Scope

There is no difference in the base configuration of a DHCP scope to prepare it for being protected with a failover scope on a secondary server. Here is a sample scope named Failover Scope A (10.20.30.0/24) that we configure just like a normal scope.


Figure 1

The IP address configuration is also done in the typical way that we have done in any DHCP server up to this point.


Figure 2
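
For reference, a hedged PowerShell sketch of building the same demo scope with the DHCP Server cmdlets (the pool boundaries and router address are made-up example values):

# Create and activate the 10.20.30.0/24 demo scope
Add-DhcpServerv4Scope -Name "Failover Scope A" -StartRange 10.20.30.10 -EndRange 10.20.30.200 -SubnetMask 255.255.255.0 -State Active
# Example scope option: default gateway
Set-DhcpServerv4OptionValue -ScopeId 10.20.30.0 -Router 10.20.30.1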

Now that we have our demo scope available, we will right-click the Scope in the DHCP Manager console and select Configure Failover:


Figure 3

The Failover Configuration wizard opens up with a list of scopes that are available to protect. In our sample, we have the 10.20.30.0/24 scope:


Figure 4

Next up we are asked about assigning a partner server for our scope. We will be using one named PartnerServer in our example:


Figure 5

By default, a name is chosen for the replication relationship name. It is ideal to pick something that will be meaningful to your team for continued management.

The parameters in our failover relationship allow us to set the failover mode which can be Active-Active (Load balance) or Active-Passive (Hot standby). In load balancing mode, there is an option to set the weight of the scope distribution at the partner. The default is a 50% split scope.


Figure 6

In this case, we are configuring a Hot standby server.


Figure 7

You will need a shared secret password configured:


Figure 8
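
The whole wizard can also be driven from PowerShell; a hedged sketch assuming two servers named DHCP1 and DHCP2 (hypothetical names) and the demo scope:

# Hot standby failover relationship for 10.20.30.0 (specifying -ServerRole selects hot standby mode)
Add-DhcpServerv4Failover -ComputerName "DHCP1" -PartnerServer "DHCP2" -Name "DHCP1-DHCP2-Failover" -ScopeId 10.20.30.0 -ServerRole Active -SharedSecret "ExampleSecret!"
# For a 50/50 load-balance relationship, use -LoadBalancePercent 50 instead of -ServerRole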

Once you complete the failover scope wizard, the process will complete and the scope will be created on the target server. The process will show the status as successful for each of the steps in the wizard:


Figure 9

With the scope configured for hot standby failover, we now have a fully operational scope that is getting asynchronous updates from the source server to keep track of current leases, reservations, options, and any other scope-specific configuration parameters. It’s just that easy!

For our own curiosity, it is good to check what the failover scope looks like. You simply open the target DHCP partner server, expand the scopes and right-click the failover scope in the DHCP Manager window. On the properties page there is a Failover tab which displays all of the information about the connection.


Figure 10

Now that failover is configured, you will also see additional options available on the context menu for the protected scope. Using these options, you can remove the failover configuration, force replication of the scope, and force replication of the relationship information:


Figure 11
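
The forced replication is exposed in PowerShell as well; a small hedged sketch using the relationship name from the example above:

# Push the current scope configuration to the failover partner
Invoke-DhcpServerv4FailoverReplication -ComputerName "DHCP1" -Name "DHCP1-DHCP2-Failover"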

Options outside the GUI

If you’ve been managing DHCP up to this point on Windows 2008 or earlier servers, you will know that most of the configuration is done in the GUI. In fact, much of the configuration was only available in the GUI in previous versions.

Under Windows Server 2012 and Windows Server 2012 R2, PowerShell has become vastly more powerful and important in the Microsoft ecosystem. New and enhanced DHCP Cmdlets are available by simply importing the DHCP Server module. This is done with a simple one-liner Import-Module DHCPServer as you can see below.

Once loaded, we can query all of the DHCP CmdLets using Get-Command *dhcp* as shown partially here:


Figure 12

Better Features and Better Management

This is a great step forward with the Microsoft DHCP tool set which provides better failover options, better load balancing options, and more options for managing the environment. By extending the management into a scriptable format with PowerShell, this is an excellent tool for administrators to move towards more orchestration of their Microsoft DHCP environments.

Step-By-Step: Migrating The Active Directory Certificate Service From Windows Server 2003 to 2012 R2


Step-By-Step: Migrating The Active Directory Certificate Service From Windows Server 2003 to 2012 R2

MVP Dishan Francis

Windows_Server_2003_Certificate_Migration

As you may be aware, support for both Windows Server 2003 and 2003 R2 is coming to an end on July 14, 2015. With this in mind, IT professionals are in the midst of planning migrations. This guide provides the steps to migrate AD CS from Windows Server 2003 to Windows Server 2012 R2.

In this demonstration I am using the following setup:

Server Name                      Operating System                         Server Roles
canitpro-casrv.canitpro.local    Windows Server 2003 R2 Enterprise x86    AD CS (Enterprise Certificate Authority)
CANITPRO-DC2K12.canitpro.local   Windows Server 2012 R2 x64

Step 1: Backup Windows Server 2003 certificate authority database and its configuration

1. Log in to the Windows Server 2003 server as a member of the local Administrators group

2. Go to Start > Administrative Tools > Certificate Authority

clip_image002

3. Right-click the server node > All Tasks > Backup CA

clip_image004

4. The "Certification Authority Backup Wizard" will open; click "Next" to continue

clip_image006

5. In the next window, check the boxes to select the options as highlighted, and click "Browse" to provide the location where the backup file will be saved. Then click "Next" to continue

clip_image008

6. You will then be asked to provide a password to protect the private key and CA certificate file. Once the password is provided, click "Next" to continue

clip_image010

7. The next window shows a confirmation; click "Finish" to complete the process
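
If you prefer the command line on the 2003 server, the same backup can be taken with certutil (the target folder is an example; you are prompted for the protection password):

# Back up the CA certificate, private key and database to C:\CABackup
certutil -backup C:\CABackup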

Step 2: Backup CA Registry Settings

1. Click Start > Run and then type regedit and click “Ok”

clip_image012

2. Then expand the key at the following path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc

3. Right-click the "Configuration" key and click "Export"

clip_image014

4. In the next window, select the path where you want to save the backup file and provide a name for it. Then click Save to complete the backup

clip_image016
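
The same export can also be done from a command prompt; a small sketch using an example backup path:

# Export the CertSvc configuration key to a .reg file
reg export "HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration" C:\CABackup\CertSvc-Configuration.reg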

Now that we have the backup of the CA, move these files to the new Windows Server 2012 R2 server.

clip_image018

Step 3: Uninstall CA Service from Windows Server 2003

Now that the backup files are ready, we can uninstall the CA services from the Windows Server 2003 server before configuring certificate services on the new Windows Server 2012 R2 server. To do that, follow these steps.

1. Click on Start > Control Panel > Add or Remove Programs
clip_image020

2. Then click on “Add/Remove Windows Components” button
clip_image022

3. In the next window, clear the "Certificate Services" check box and click Next to continue
clip_image024

4. Once the process completes, it will show a confirmation; click "Finish"
clip_image026

With that, we are done with the Windows Server 2003 CA services; the next step is to install and configure the Windows Server 2012 CA services.

Step 4: Install Windows Server 2012 R2 Certificate Services

1. Log in to the Windows Server 2012 server as a Domain Administrator or a member of the local Administrators group

2. Go to Server Manager > Add roles and features
clip_image028

3. The "Add Roles and Features" wizard will open; click Next to continue
clip_image030

4. In the next window, select "Role-based or feature-based installation" and click Next to continue
clip_image032

5. On the server selection page, keep the default selection and click Next to continue
clip_image034

6. In the next window, tick the box to select "Active Directory Certificate Services"; a pop-up window will list the required features that need to be added. Click "Add Features" to add them
clip_image036clip_image038

7. In the Features section, leave the defaults and click Next to continue
clip_image040

8. The next window gives a brief description of AD CS. Click Next to continue
clip_image042

9. It then gives the option to select role services. I have selected Certification Authority and Certification Authority Web Enrollment. Click Next to continue
clip_image044

10. Since Certification Authority Web Enrollment was selected, IIS is required, so the next window gives a brief description of IIS
clip_image046

11. The next window gives the option to add IIS role services. I will leave the defaults and click Next to continue
clip_image048

12. The next window shows a confirmation of the services to install; click "Install" to start the installation process
clip_image050

13. Once installation completes you can close the wizard.
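
On the 2012 R2 side, the same roles can also be installed with a single line of PowerShell (feature names as they ship in Server 2012 R2):

# Install the CA and Web Enrollment role services plus the management tools
Install-WindowsFeature ADCS-Cert-Authority, ADCS-Web-Enrollment -IncludeManagementTools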

Step 5: Configure AD CS

In this step we will look into configuring the role and restoring the backup we created.

1. Log in to the server as an Enterprise Administrator

2. Go to Server Manager > AD CS
clip_image052

3. The right-hand panel will show a message as in the following screenshot; click "More"
clip_image054

4. A window will open; click "Configure Active Directory Certificate Service ……"
clip_image056

5. The role configuration wizard opens and gives the option to change the credentials. Since I am already logged in as an Enterprise Administrator, I will leave the default and click Next to continue
clip_image058

6. The next window asks which services you would like to configure. Select the "Certification Authority" and "Certification Authority Web Enrollment" options and click Next to continue
clip_image060

7. It will be an Enterprise CA, so in the next window select Enterprise CA as the setup type and click Next to continue
clip_image062

8. In the next window, select "Root CA" as the CA type and click Next to continue
clip_image064

9. The next option is very important in this configuration. For a new installation we would simply create a new private key, but since this is a migration we already have a backup of the private key. So here, select the options as highlighted in the screenshot, then click Next to continue
clip_image066

10. In the next window, click the "Import" button
clip_image068

11. Here it gives the option to select the key we backed up from the Windows 2003 server. Browse and select the key from the backup we made, provide the password we used for protection, and click OK
clip_image070

12. The key will be imported successfully; in the window, select the imported certificate and click Next to continue
clip_image072

13. In the next window we can define the certificate database path. I will leave the default and click Next to continue
clip_image074

14. The next window shows the configuration confirmation; click Configure to proceed with the process
clip_image076

15. Once it completes, click Close to exit the configuration wizard
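
For reference, a hedged sketch of the same configuration from PowerShell, reusing the .p12 exported by the 2003 backup wizard (the file name and path are examples; parameters per the ADCSDeployment module):

# Configure an Enterprise Root CA from the existing certificate and key
$certPwd = Read-Host -AsSecureString -Prompt "CA backup password"
Install-AdcsCertificationAuthority -CAType EnterpriseRootCA -CertFile "C:\CABackup\MyCompany-CA.p12" -CertFilePassword $certPwd
# Configure the web enrollment role service as well
Install-AdcsWebEnrollment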

Step 6: Restore CA Backup

Now comes the most important part of the process: restoring the CA backup we made from Windows Server 2003.

1. Go To Server Manager > Tools > Certification Authority
clip_image078

2. Then right-click the server node > All Tasks > Restore CA
clip_image080

3. It will ask whether it is okay to stop the certificate service in order to proceed. Click OK
clip_image082

4. The Certification Authority Restore Wizard will open; click Next to continue
clip_image084

5. In the next window, browse to the folder where we stored the backup and select it. Also select the options as shown below, then click Next to continue
clip_image086

6. The next window gives the option to enter the password we used to protect the private key during the backup process. Once it is entered, click Next to continue
clip_image088

7. In the next window, click "Finish" to complete the import process
clip_image090

8. Once it completes, the system will ask whether it is okay to start the certificate service again. Proceed to bring the service back online
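
Alternatively, the database restore can be scripted with certutil (the path is the example backup folder; the CA service has to be stopped during the restore):

# Stop the CA service, restore the database from the backup folder, then start the service again
net stop certsvc
certutil -restoredb C:\CABackup
net start certsvc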

Step 7: Restore Registry info

During the CA backup process we also backed up the registry key; now it is time to restore it. To do so, open the folder that contains the backed-up .reg file and double-click it.

1. Click Yes to proceed with the registry key restore
clip_image092

2. Once completed, it will show a confirmation of the restore
clip_image094
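
The same restore can be done from an elevated prompt, followed by a service restart so the imported settings take effect (the file name matches the export example earlier; review the imported values in case any still reference the old server):

# Import the saved CertSvc configuration and restart the CA service
reg import C:\CABackup\CertSvc-Configuration.reg
net stop certsvc
net start certsvc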

Step 8: Reissue Certificate Templates

We are done with the migration process and now it is time to reissue the certificate templates. I had a template in the Windows 2003 environment called "PC Certificate" which issues certificates to the domain computers. Let's see how I can reissue it.

1. Open the Certification Authority Snap-in

2. Right-click the Certificate Templates folder > New > Certificate Template to Issue
clip_image096

3. From the certificate templates list, click the appropriate certificate template and click OK
clip_image098
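
The same re-publishing can be done with certutil; a hedged sketch using the article's example template (use the template's short name, which may differ from its display name):

# Add the PC certificate template back to the list of templates this CA issues
certutil -SetCATemplates +PCCertificate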

Step 9: Test the CA

Here I already had a certificate template set up for PCs and configured for auto-enrollment. For testing purposes I set up a Windows 8 PC called demo1 and added it to the canitpro.local domain. Once it had loaded for the first time, I opened the Certification Authority snap-in on the server, and when I expanded the "Issued Certificates" section I could clearly see the new certificate issued for the PC.

clip_image100
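
On the test client you can nudge auto-enrollment along rather than waiting for the next refresh; a small hedged sketch:

# Refresh Group Policy and trigger certificate auto-enrollment on the client (demo1)
gpupdate /force
certutil -pulse
# Then check the computer's personal store for the newly issued certificate
certutil -store My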

So this confirms the migration is successful.