EMC FAST: Whether to File and/or Block Tier

Source: Just Be Better

Storage performance needs in today’s data center can change at a moment’s notice. Data that needed the backing of a freight train yesterday may only need the performance of a Vespa tomorrow. The ability to react to the ever-changing needs of one’s data in an automated fashion allows efficiencies never before seen in EMC’s midrange product line. Generally, as data ages, its importance lessens from both a business and a usage perspective. FAST allows EMC customers to place data on the appropriate storage tier based on application requirements and service levels. Choosing between cost (SATA/NL-SAS) and performance (EFD/SAS) is a thing of the past. Below are the what, when and why of EMC’s FAST, with the intent of helping you make an informed decision based on the needs of your organization.

Block Tiering (What, When and Why)

The What: FAST for VNX/CLARiiON is an array-based feature that uses Analyzer to move block-based data (slices of LUNs). By capturing performance characteristics, it can intelligently predict where that data will be best utilized. Data is moved at the sub-LUN level in 1 GB slices, eliminating the need for (and overhead of) moving the full LUN. This means portions of a LUN can exist on multiple disk types (FC, SATA, EFD). Migrations are seamless to the host and occur bidirectionally based on performance needs, i.e. FC to SATA, FC to EFD, SATA to FC, SAS to NL-SAS, etc. FAST operates at the storage pool level and is not available within traditional RAID groups. To utilize FAST v2 (which is sub-LUN tiering) you must be at FLARE 30 or above and have both the Analyzer and FAST enablers installed on the array. Existing LUNs/data can migrate seamlessly and non-disruptively into storage pools using the VNX/CLARiiON LUN migration feature. Additionally, FAST operates without issue alongside other array-based features such as SnapView, MirrorView, SAN Copy, RecoverPoint, etc. All FAST operations and scheduling are configurable through Unisphere.
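Conceptually, the relocation decision can be sketched in a few lines of Python. This is an illustrative toy only: the 1 GB slice size comes from the text above, while the function name, "temperature" values, and tier slot counts are invented; the real relocation logic is internal to the array.

```python
SLICE_GB = 1  # FAST v2 moves data in 1 GB slices, not whole LUNs

def plan_relocation(slice_temps, efd_slots, nlsas_slots):
    """Given {slice_id: observed IOPS}, pick the hottest slices for EFD
    and the coldest for NL-SAS; the rest stay on the middle (SAS) tier."""
    ranked = sorted(slice_temps, key=slice_temps.get, reverse=True)
    return {
        "EFD": ranked[:efd_slots],
        "NL-SAS": ranked[-nlsas_slots:] if nlsas_slots else [],
        "SAS": ranked[efd_slots:len(ranked) - nlsas_slots],
    }

# A 6 GB LUN: two hot slices, two warm, two idle
temps = {"s0": 900, "s1": 850, "s2": 120, "s3": 95, "s4": 2, "s5": 0}
plan = plan_relocation(temps, efd_slots=2, nlsas_slots=2)
print(plan["EFD"])     # hottest slices move up to Flash
print(plan["NL-SAS"])  # coldest slices move down
```

Note how a single LUN ends up spread across three disk types, which is exactly the sub-LUN behavior described above.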

The When: Automated tiering is a scheduled batch event and does not happen dynamically.

The Why: To better align your application service levels with the best storage type. Ease of management is another: FAST requires storage pools, and storage pools allow for concise management and eased growth from one location. Individual RAID group and MetaLUN management is not needed to obtain high-end service levels when using storage pools and FAST. The idea going forward is to minimize disk purchasing requirements by moving hot and cold data to and from disk types that meet specific service levels for that data. If data is accessed frequently, it makes sense for it to live on either EFD (Enterprise Flash Drives) or FC/SAS. If data is not accessed frequently, it ideally should live on SATA/NL-SAS. By utilizing FAST in your environment, you use your array in the most efficient manner while minimizing capex costs.

File Tiering (What, When and Why)

The What: FAST for VNX File/Celerra utilizes the Cloud Tiering Appliance (formerly FMA, previously known as Rainfinity). The CTA utilizes a policy engine that allows movement of infrequently used files across different storage tiers based on last access times, modify times, size, filename, etc. As data is moved, the user perception is that the files still exist on primary storage. File retrieval (or recall) is initiated simply by clicking on the file; the file is then copied back to its original location. The appliance itself is available as a virtual appliance that can be imported into your existing VMware infrastructure via vCenter, or as a physical appliance (hardware plus the software). Unlike FAST for VNX/CLARiiON, FAST for file allows you to tier across arrays (Celerra <-> VNX, Isilon or third party arrays) or cloud service providers (Atmos namely, with other service providers coming). The introduction of CTA to your environment is non-disruptive. All operations for CTA are configurable through the CTA GUI. In summary, CTA can be used as a tiering engine, an archiving engine or a migration engine based on the requirements of your business. From an archiving perspective, CTA can utilize both Data Domain and Centera targets for long-term enforced file-level retention. As a migration engine, CTA can be utilized for permanent file moves from one array to another during technology refreshes or platform conversions. Note: CTA has no knowledge of the storage type; it simply moves files from one tier to another based on pre-defined criteria.
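The policy criteria described above (last access time, size, and so on) can be sketched as a simple walk over a file tree. This is a hypothetical illustration, not CTA code; the function name and thresholds are invented, and real CTA policies are configured through its GUI.

```python
import os
import time

def files_to_tier(root, min_age_days=180, min_size=1024):
    """Walk a tree and return files not accessed in `min_age_days` days
    and at least `min_size` bytes -- candidates for the low-end tier."""
    cutoff = time.time() - min_age_days * 86400
    candidates = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_atime < cutoff and st.st_size >= min_size:
                candidates.append(path)
    return candidates
```

A real recall path would also leave a stub behind so the file still appears on primary storage, which is the part the appliance handles transparently.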

The When: Automated tiering runs at scheduled intervals (in batches); it does not happen dynamically or continuously.

The Why: Unstructured data, i.e. data that exists outside of a pre-defined data model such as a SQL database, is growing at an alarming rate. Think about how many Word docs, Excel spreadsheets, pictures and text files exist in your current NAS or general file-serving environment. Of that number, what percentage hasn’t been touched since its initial creation? A fair assessment would be 50% of that data; a more accurate assessment is probably 80%. Archiving and tiering via CTA simply allows for more efficient use of your high-end and low-end storage. If 80% of your data is not accessed, or accessed infrequently, it has no business being on fast spinning disk (FC or SAS). Ultimately this allows you to curb future spending on pricey high-end disk and focus purchasing on capacity where your data should sit: low-end storage.
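The argument above is easy to put numbers on. The $/GB figures below are placeholders for illustration, not EMC pricing; only the 80% cold-data assumption comes from the text.

```python
def blended_cost(total_gb, cold_fraction, fast_per_gb, slow_per_gb):
    """Cost of placing the hot fraction on fast disk and the rest on slow."""
    hot = total_gb * (1 - cold_fraction)
    cold = total_gb * cold_fraction
    return hot * fast_per_gb + cold * slow_per_gb

total = 100_000  # 100 TB of unstructured data, in GB
all_fast = blended_cost(total, 0.0, fast_per_gb=5.0, slow_per_gb=1.0)
tiered   = blended_cost(total, 0.8, fast_per_gb=5.0, slow_per_gb=1.0)
print(all_fast, tiered)  # 500000.0 vs 180000.0
```

With placeholder pricing of 5:1 between tiers, moving the 80% cold slice off fast disk cuts the spend to roughly a third.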


As brought to my attention on the twitters (thanks @PBradz and @veverything), there is of course another option. Historically, the data LUNs used by the data movers for file-specific data (CIFS, NFS) have only been supported on traditional RAID group LUNs. With the introduction of the VNX, support has been extended to pool LUNs. This means you can utilize FAST block tiering for the data that encompasses those LUNs. A few things to keep in mind when designing and utilizing it in this manner (more info here)…

  • The entire pool should be used by file only
  • Thick LUNs only within the pool
  • Same tiering policy for each pool LUN
  • Utilize compression and dedupe on the file side. Stay clear of block thin provisioning and compression.

There are of course numerous other recommendations that should be noted if you decide to go this route. Personally, it’s taken me a while to warm up to storage pools. Like any new technology, it needs to gain my trust before I go all in on recommending it. Inherent bugs and inefficiencies early on have made me somewhat cautious. Assuming you walk the line on how your pools are configured, this is a very viable means to file-tier (so to speak) with the purchase of FAST block only. That said, there is still benefit to using the CTA for long-term archiving, primarily off-array, as FAST block is currently array-bound. Define the requirements up front so you’re not surprised on the back end by what the technology can and cannot do. If the partner you’re working with is worth their salt, you’ll know all applicable options before that PO is cut…


EMC VNX – LUN migration using VNX Snapshot | Storage Freak


One of the VNX Snapshot use cases is to promote a snapshot to be a standard LUN. To do that, the snapshot must be attached, and then the SMP (Snapshot Mount Point) can be migrated to another LUN. If a migrating SMP has any snapshots associated with it (for example cascading snapshots), all of them will be destroyed.

mount SMP to another host

Let’s discuss the case:

  • Host1 is a production server running an application with PrimaryLUN1 provisioned.
  • Host2 is a development server. This server must try a new version of the application on a copy of the production data.
  • The Administrator performs the following actions:
  1. Takes a snapshot ‘Snap2’ from the production PrimaryLUN1
  2. Creates an SMP for a snapshot from PrimaryLUN1
  3. Provisions the SMP to Host2 (for example, adds the SMP to the storage group for Host2)
  4. Attaches Snap2 snapshot to SMP
  5. Runs SCSI rescan on Host2
  6. Creates a local drive on Host2
  • At some point, SMP is snapped, and Snap2.1 is created
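The ordering of those steps can be modeled with a toy class. The names and methods below are invented for illustration; on a real array these operations are performed through Unisphere or the CLI, and the point here is only the sequencing: provision the SMP, then attach a snapshot to it.

```python
class SnapshotMountPoint:
    """Toy model of a VNX Snapshot Mount Point (SMP)."""

    def __init__(self, source_lun):
        self.source_lun = source_lun
        self.attached_snap = None
        self.in_storage_group = None

    def provision(self, host):
        # step 3: add the SMP to the host's storage group
        self.in_storage_group = host

    def attach(self, snap):
        # step 4: attach the snapshot; an SMP exposes one snapshot at a time
        if self.attached_snap is not None:
            raise RuntimeError("detach the current snapshot first")
        self.attached_snap = snap

smp = SnapshotMountPoint("PrimaryLUN1")   # step 2
smp.provision("Host2")                    # step 3
smp.attach("Snap2")                       # step 4
print(smp.attached_snap)                  # Host2 now sees Snap2's data
```

After the SCSI rescan (step 5), Host2 sees the SMP as a regular device backed by Snap2’s point-in-time data.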

After some time of running development code on the SMP, it is decided to promote it to an independent LUN. To do that, the Administrator performs the following actions:


vnx | The SAN Guy 

After speaking to our local rep and attending many different classes at the most recent EMC World in Vegas, I came away with some good information and a very logical best practice for implementing multi-tiered FAST VP storage pools.

First and foremost, you have to use Flash.  High-RPM Fiber Channel drives are neither capacity efficient nor performance efficient; the highest-IO data needs to be hosted on Flash drives.  The most effective split of a storage pool is 5% Flash, 20% Fiber Channel, and 75% SATA (by capacity).

As an example, if you have an existing SAN with 167 15,000 RPM 600GB Fiber Channel drives, you could replace them with 97 drives in the 5/20/75 blend to get the same capacity with much improved performance:

  • 25 200GB Flash Drives
  • 34 15K 600GB Fiber Channel Drives
  • 38 2TB SATA Drives
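A quick sanity check on those drive counts shows the blend lands at roughly the same raw capacity, and that the 5/20/75 split is by capacity rather than by drive count (raw, pre-RAID figures; usable capacity depends on RAID layout):

```python
# Raw-capacity check of the 167-drive vs 97-drive example above (GB).
all_fc = 167 * 600                            # existing all-FC configuration
flash, fc, sata = 25 * 200, 34 * 600, 38 * 2000
blend = flash + fc + sata                     # proposed 5/20/75 blend

print(all_fc, blend)  # 100200 vs 101400 -- roughly equal raw capacity
for name, gb in [("Flash", flash), ("FC", fc), ("SATA", sata)]:
    print(name, round(100 * gb / blend), "%")  # ~5 / 20 / 75 by capacity
```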

The ideal scenario is to implement FAST Cache along with FAST VP.  FAST Cache continuously ensures that the hottest data is served from Flash drives.  With FAST Cache, up to 80% of your data IO can come from cache (legacy DRAM cache served up only about 20%).

It can be a hard pill to swallow when you see how much the Flash drives cost, but their cost is negated by increased disk utilization and reduction in the number of total drives and DAEs that you need to buy.   With all FC drives, disk utilization is sacrificed to get the needed performance – very little of the capacity is used, you just buy tons of disks in order to get more spindles in the raid groups for better performance.  Flash drives can achieve much higher utilization, reducing the effective cost.

After implementing this at my company I’ve seen dramatic performance improvements.  It’s an effective strategy that really works in the real world.

In addition to this, I’ve also been implementing storage pools in identical pairs. The first pool is designated only for SP A, the second for SP B. When I get a request for data storage, say for 1 TB, I will create a 500GB LUN in the first pool on SP A and a 500GB LUN in the second pool on SP B. When the disks are presented to the host server, the server administrator then stripes the data across the two LUNs. Using this method, I can better balance the load across the storage processors on the back end.
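The pairing scheme can be sketched as a trivial allocator. The pool names are invented for illustration:

```python
def split_request(total_gb):
    """Split one capacity request into two equal LUNs, one per pool/SP;
    the host then stripes across the pair to load both SPs evenly."""
    half = total_gb // 2
    return [
        {"pool": "Pool_SPA", "owner": "SP A", "size_gb": half},
        {"pool": "Pool_SPB", "owner": "SP B", "size_gb": total_gb - half},
    ]

print(split_request(1000))  # a 1 TB request becomes two 500 GB LUNs
```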


Undocumented Celerra / VNX File commands | The SAN Guy

The .server_config command is undocumented from EMC, I assume they don’t want customers messing with it. Use these commands at your own risk. 🙂

Below is a list of some of those undocumented commands; most are meant for viewing performance stats. I’ve had EMC support use the fcp command during a support call in the past. When using the command for fcp stats, I believe you need to run the 'reset' command first, as it enables the collection of statistics.

There are likely other parameters that can be used with .server_config but I haven’t discovered them yet.

TCP Stats:

To view TCP info:
.server_config server_x -v "printstats tcpstat"
.server_config server_x -v "printstats tcpstat full"
.server_config server_x -v "printstats tcpstat reset"

Sample Output (truncated):
TCP stats :
connections initiated 8698
connections accepted 1039308
connections established 1047987
connections dropped 524
embryonic connections dropped 3629
conn. closed (includes drops) 1051582
segs where we tried to get rtt 8759756
times we succeeded 11650825
delayed acks sent 537525
conn. dropped in rxmt timeout 0
retransmit timeouts 823
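If you want to trend these counters across runs, the output is easy to parse; a minimal sketch, assuming each stat line ends in a single integer as in the sample above:

```python
def parse_tcpstat(text):
    """Turn 'description <count>' lines into a {description: count} dict,
    skipping headers and any line that doesn't end in an integer."""
    stats = {}
    for line in text.splitlines():
        parts = line.rsplit(None, 1)
        if len(parts) == 2 and parts[1].isdigit():
            stats[parts[0].strip()] = int(parts[1])
    return stats

sample = """TCP stats :
connections initiated 8698
connections accepted 1039308
retransmit timeouts 823"""
counts = parse_tcpstat(sample)
print(counts["connections initiated"])  # 8698
```

Run the same parse on two captures taken minutes apart and subtract to get rates.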

SCSI Stats:

To view SCSI IO info:
.server_config server_x -v "printstats scsi"
.server_config server_x -v "printstats scsi reset"

Sample Output:
Ctlr: IO-pending   Max-IO     IO-total     Idle(ms)    Busy(ms)  Busy(%)
0:             0       53     44925729    122348758    19159954      13%
1:             0        1            1    141508682           0       0%
2:             0        1            1    141508682           0       0%
3:             0        1            1    141508682           0       0%
4:             0        1            1    141508682           0       0%

File Stats:

.server_config server_x -v "printstats filewrite"
.server_config server_x -v "printstats filewrite full"
.server_config server_x -v "printstats filewrite reset"

Sample output (Full Output):
13108 writes of 1 blocks in 52105250 usec, ave 3975 usec
26 writes of 2 blocks in 256359 usec, ave 9859 usec
6 writes of 3 blocks in 18954 usec, ave 3159 usec
2 writes of 4 blocks in 2800 usec, ave 1400 usec
4 writes of 13 blocks in 6284 usec, ave 1571 usec
4 writes of 18 blocks in 7839 usec, ave 1959 usec
total 13310 blocks in 52397489 usec, ave 3936 usec
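The summary line can be re-derived from the per-size lines above; note the per-line averages are per write, while the total average is per block:

```python
# Cross-check of the "filewrite" sample output: (writes, blocks_per_write, usec)
rows = [
    (13108, 1, 52105250),
    (26, 2, 256359),
    (6, 3, 18954),
    (2, 4, 2800),
    (4, 13, 6284),
    (4, 18, 7839),
]
blocks = sum(w * b for w, b, _ in rows)   # total blocks written
usec = sum(u for _, _, u in rows)         # total time spent
print(blocks, usec // blocks)  # 13310 blocks, ave 3936 usec
```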

FCP Stats:

To view FCP stats, useful for checking SP balance:
.server_config server_x -v "printstats fcp"
.server_config server_x -v "printstats fcp full"
.server_config server_x -v "printstats fcp reset"

Sample Output (Truncated):
Total I/O Cmds: +0%-------25%-------50%-------75%------100%+ Total 0
FCP HBA 0 |                                             | 0%  0
FCP HBA 1 |                                             | 0%  0
FCP HBA 2 |                                             | 0%  0
FCP HBA 3 |                                             | 0%  0
# Read Cmds:    +0%-------25%-------50%-------75%------100%+ Total 0
FCP HBA 0 |                                             | 0%  0
FCP HBA 1 |                                             | 0%  0
FCP HBA 2 |                                             | 0%  0
FCP HBA 3 | XXXXXXXXXXX                                 | 25% 0


'fcp' options are:       bind …, flags, locate, nsshow, portreset=n, rediscover=n,
                         rescan, reset, show, status=n, topology, version

'fcp bind' options are:  clear=n, read, rebind, restore=n, show,
                         showbackup=n, write


Commands for 'fcp' operations:
fcp bind <cmd> ……… Further fibre channel binding commands
fcp flags ………….. Show online flags info
fcp locate …………. Show ScsiBus and port info
fcp nsshow …………. Show nameserver info
fcp portreset=n …….. Reset fibre port n
fcp rediscover=n ……. Force fabric discovery process on port n
Bounces the link, but does not reset the port
fcp rescan …………. Force a rescan of all LUNS
fcp reset ………….. Reset all fibre ports
fcp show …………… Show fibre info
fcp status=n ……….. Show link status for port n
fcp status=n clear ….. Clear link status for port n and then Show
fcp topology ……….. Show fabric topology info
fcp version ………… Show firmware, driver and BIOS version

Commands for 'fcp bind' operations:
fcp bind clear=n ……. Clear the binding table in slot n
fcp bind read ………. Read the binding table
fcp bind rebind …….. Force the binding thread to run
fcp bind restore=n ….. Restore the binding table in slot n
fcp bind show ………. Show binding table info
fcp bind showbackup=n .. Show Backup binding table info in slot n
fcp bind write ……… Write the binding table

NDMP Stats:

To Check NDMP Status:
.server_config server_x -v "printstats vbb show"

CIFS Stats:

This will output a CIFS report, including all servers, DCs, IPs, interfaces, MAC addresses, and more.

.server_config server_x -v "cifs"

Sample Output:

1327007227: SMB: 6: 256 Cifs threads started
1327007227: SMB: 6: Security mode = NT
1327007227: SMB: 6: Max protocol = SMB2
1327007227: SMB: 6: I18N mode = UNICODE
1327007227: SMB: 6: Home Directory Shares DISABLED
1327007227: SMB: 6: Usermapper auto broadcast enabled
1327007227: SMB: 6:
1327007227: SMB: 6: Usermapper[0] = [] state:active (auto discovered)
1327007227: SMB: 6:
1327007227: SMB: 6: Default WINS servers =
1327007227: SMB: 6: Enabled interfaces: (All interfaces are enabled)
1327007227: SMB: 6:
1327007227: SMB: 6: Disabled interfaces: (No interface disabled)
1327007227: SMB: 6:
1327007227: SMB: 6: Unused Interface(s):
1327007227: SMB: 6:  if=172-168-1-84 l= b= mac=0:60:48:1c:46:96
1327007227: SMB: 6:  if=172-168-1-82 l= b= mac=0:60:48:1c:10:5d
1327007227: SMB: 6:  if=172-168-1-81 l= b= mac=0:60:48:1c:46:97
1327007227: SMB: 6:
1327007227: SMB: 6:
1327007227: SMB: 6:  SID=S-1-5-15-7c531fd3-6b6745cb-ff77ddb-ffffffff
1327007227: SMB: 6:  DC=DCAD01( ref=2 time=0 ms
1327007227: SMB: 6:  DC=DCAD02( ref=2 time=0 ms
1327007227: SMB: 6:  DC=DCAD03( ref=2 time=0 ms
1327007227: SMB: 6:  DC=DCAD04( ref=2 time=0 ms
1327007227: SMB: 6: >DC=SERVERDCAD01( ref=334 time=1 ms (Closest Site)
1327007227: SMB: 6: >DC=SERVERDCAD02( ref=273 time=1 ms (Closest Site)
1327007227: SMB: 6:
1327007227: UFS: 7: inc ino blk cache count: nInoAllocs 361: inoBlk 0x0219f2a308
1327007227: SMB: 6:  Full computer name=SERVERFILESEMC.DOMAIN_NAME.net realm=DOMAIN_NAME.NET
1327007227: SMB: 6:  Comment='EMC-SNAS:T6.0.41.3'
1327007227: SMB: 6:  if=172-168-1-161 l= b= mac=0:60:48:1c:46:9c
1327007227: SMB: 6:   FQDN=SERVERFILESEMC.DOMAIN_NAME.net (Updated to DNS)
1327007227: SMB: 6:  Password change interval: 0 minutes
1327007227: SMB: 6:  Last password change: Fri Jan  7 19:25:30 2011 GMT
1327007227: SMB: 6:  Password versions: 2, 2
1327007227: SMB: 6:
1327007227: SMB: 6: CIFS Server SERVERBKUPEMC[DOMAIN_NAME] RC=2 (local users supported)
1327007227: SMB: 6:  Full computer name=SERVERbkupEMC.DOMAIN_NAME.net realm=DOMAIN_NAME.NET
1327007227: SMB: 6:  Comment='EMC-SNAS:T6.0.41.3'
1327007227: SMB: 6:  if=172-168-1-90 l= b= mac=0:60:48:1c:10:54
1327007227: SMB: 6:   FQDN=SERVERbkupEMC.DOMAIN_NAME.net (Updated to DNS)
1327007227: SMB: 6:  Password change interval: 0 minutes
1327007227: SMB: 6:  Last password change: Thu Sep 30 16:23:50 2010 GMT
1327007227: SMB: 6:  Password versions: 2
1327007227: SMB: 6:

Domain Controller Commands:

These commands are useful for troubleshooting a windows domain controller connection issue on the control station.  Use these commands along with checking the normal server log (server_log server_2) to troubleshoot that type of problem.

To view the current domain controllers visible on the data mover:

.server_config server_2 -v "pdc dump"

Sample Output (Truncated):

1327006571: SMB: 6: Dump DC for dom='<domain_name>' OrdNum=0
1327006571: SMB: 6: Domain=<domain_name> Next trusted domains update in 476 seconds
1327006571: SMB: 6:  oldestDC:DomCnt=1,179531 Time=Sat Oct 15 15:32:14 2011
1327006571: SMB: 6:  Trusted domain info from DC='<Windows_DC_Servername>' (423 seconds ago)
1327006571: SMB: 6:   Trusted domain:<domain_name>.net [<Domain_Name>]
1327006571: SMB: 6:    Flags=0x20 Ix=0 Type=0x2 Attr=0x0
1327006571: SMB: 6:    SID=S-1-5-15-d1d612b1-87382668-9ba5ebc0
1327006571: SMB: 6:    DC='-'
1327006571: SMB: 6:    Status Flags=0x0 DCStatus=0x547,1355
1327006571: SMB: 6:   Trusted domain: <Domain_Name>
1327006571: SMB: 6:    Flags=0x22 Ix=0 Type=0x1 Attr=0x1000000
1327006571: SMB: 6:    SID=S-1-5-15-76854ac0-4c527104-321d5138
1327006571: SMB: 6:    DC='\<Windows_DC_Servername>'
1327006571: SMB: 6:    Status Flags=0x0 DCStatus=0x0,0
1327006571: SMB: 6:   Trusted domain:<domain_name>.net [<domain_name>]
1327006571: SMB: 6:    Flags=0x20 Ix=0 Type=0x2 Attr=0x0
1327006571: SMB: 6:    SID=S-1-5-15-88d60754-f3ed4f9d-b3f2cbc4
1327006571: SMB: 6:    DC='-'
1327006571: SMB: 6:    Status Flags=0x0 DCStatus=0x547,1355
DC=DC0x0067a82c18 <Windows_DC_Servername>[<domain_name>]( ref=2 time(getdc187)=0 ms LastUpdt=Thu Jan 19 20:45:14 2012
    Pid=1000 Tid=0000 Uid=0000
    Cnx=UNSUCCESSFUL,DC state Unknown
    logon=Unknown 0 SecureChannel(s):
    Capa=0x0 Nego=0x0000000000,L=0 Chal=0x0000000000,L=0,W2kFlags=0x0
    refCount=2 newElectedDC=0x0000000000 forceInvalid=0
    Discovered from: WINS
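When the dump is long, it can help to filter a saved copy down to just the DC entries and their connection state. A quick sketch; the heredoc below stands in for real captured output, and the server and domain names in it are placeholders:

```shell
# Filter a saved 'pdc dump' down to each DC entry and its connection state.
# On the control station you would capture the dump first, e.g.:
#   .server_config server_2 -v "pdc dump" > pdc_dump.txt
# The heredoc below is placeholder sample data standing in for that capture.
cat > pdc_dump.txt <<'EOF'
DC=DC0x0067a82c18 WINDC01[mydomain]( ref=2 time(getdc187)=0 ms LastUpdt=Thu Jan 19 20:45:14 2012
    Pid=1000 Tid=0000 Uid=0000
    Cnx=UNSUCCESSFUL,DC state Unknown
EOF

# One line per DC, plus its connection status:
grep -E 'DC=DC0x|Cnx=' pdc_dump.txt
```

An `Cnx=UNSUCCESSFUL` line, as in the truncated sample above, is the quickest tip-off that the data mover cannot reach that DC.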

To enable or disable a domain controller on the data mover:

.server_config server_2 -v "pdc enable=<ip_address>"  Enable a domain controller

.server_config server_2 -v "pdc disable=<ip_address>"  Disable a domain controller
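Putting these together, a common sequence when one DC is misbehaving is to disable it, confirm the data mover has failed over to another DC, and re-enable it once it is healthy again. A minimal sketch with placeholder IPs; the `run` wrapper only echoes each command so the sequence can be reviewed before executing it for real on the control station:

```shell
#!/bin/sh
# Sketch of a DC failover sequence on the control station.
# The IP address below is a placeholder -- substitute your own DC.
BAD_DC=10.0.0.10
run() { echo "+ $*"; }   # swap 'echo' for real execution when ready

run .server_config server_2 -v "pdc disable=$BAD_DC"   # stop using the bad DC
run .server_config server_2 -v "pdc dump"              # confirm another DC took over
run .server_config server_2 -v "pdc enable=$BAD_DC"    # re-enable once healthy
```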


To view memory usage statistics on the data mover:

.server_config server_2 -v "meminfo"

Sample Output (truncated):

3552907011 calls to malloc, 3540029263 to free, 61954 to realloc
Size     In Use       Free      Total nallocs nfrees
16       3738        870       4608   161720370   161716632
32      18039      17289      35328   1698256206   1698238167
64       6128       3088       9216   559872733   559866605
128       6438      42138      48576   255263288   255256850
256       8682      19510      28192   286944797   286936115
512       1507       2221       3728   357926514   357925007
1024       2947       9813      12760   101064888   101061941
2048       1086        198       1284    5063873    5062787
4096         26        138        164    4854969    4854943
8192        820         11        831   19562870   19562050
16384         23         10         33       5676       5653
32768          6          1          7        101         95
65536         12          0         12         12          0
524288          1          0          1          1          0
Total Used     Total Free    Total Used + Free
all sizes   18797440   23596160   42393600
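As a sanity check, the "Total Used" figure is just the sum of Size times the "In Use" count across the size classes. A small awk sketch over a saved copy of the output; the heredoc reproduces the table above, while on a live system you would redirect the `meminfo` output to the file instead:

```shell
# Recompute total bytes in use from a saved meminfo dump.
# The heredoc reproduces the size-class table from the sample output above.
cat > meminfo.txt <<'EOF'
16       3738        870       4608   161720370   161716632
32      18039      17289      35328   1698256206   1698238167
64       6128       3088       9216   559872733   559866605
128       6438      42138      48576   255263288   255256850
256       8682      19510      28192   286944797   286936115
512       1507       2221       3728   357926514   357925007
1024       2947       9813      12760   101064888   101061941
2048       1086        198       1284    5063873    5062787
4096         26        138        164    4854969    4854943
8192        820         11        831   19562870   19562050
16384         23         10         33       5676       5653
32768          6          1          7        101         95
65536         12          0         12         12          0
524288          1          0          1          1          0
EOF

# Sum size * in-use count for every size class (fields 1 and 2).
awk '{ used += $1 * $2 } END { print used " bytes in use" }' meminfo.txt
# -> 18797440 bytes in use, matching the "Total Used" line above
```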


To see the available memory-owner debugging commands on the data mover:

.server_config server_2 -v "help memowners"

memowners [dump | showmap | set ... ]

memowners [dump] - prints memory owner description table
memowners showmap - prints a memory usage map
memowners memfrag [chunksize=#] - counts free chunks of given size
memowners set priority=# tag=# - changes dump priority for a given tag
memowners set priority=# label='string' - changes dump priority for a given label
The priority value can be set to 0 (lowest) to 7 (highest).

Sample Output (truncated):

1408979513: KERNEL: 6: Memory_Owner dump.
Total Frames 1703936 Registered = 75,  maxOwners = 128
1408979513: KERNEL: 6:   0 (   0 frames) No owner, Dump priority 6
1408979513: KERNEL: 6:   1 (3386 frames) Free list, Dump priority 0
1408979513: KERNEL: 6:   2 (40244 frames) malloc heap, Dump priority 6
1408979513: KERNEL: 6:   3 (6656 frames) physMemOwner, Dump priority 7
1408979513: KERNEL: 6:   4 (36091 frames) Reserved Mem based on E820, Dump priority 0
1408979513: KERNEL: 6:   5 (96248 frames) Address gap based on E820, Dump priority 0
1408979513: KERNEL: 6:   6 (   0 frames) Rmode isr vectors, Dump priority 7


Note from Tanny:

This post did not work for me as written, but it is worth sharing. In my case it was a matter of just bringing the storage resource online in the cluster resource manager.

For a 2008 R2 clustered environment, take a look at the cluster resource manager and bring the storage resource online. We swing a LUN between different servers for quick backups and restores. The instructions below did not work for me, but usually after presenting the LUN to the cluster (or to any of the standalone environments), a quick rescan will bring the disk online and keep the previous drive letter.

Source: The disk is offline because of policy set by an administrator (repost from the Happy SysAdm blog)

You have just installed or cloned a VM with Windows 2008 Enterprise or Datacenter or you have upgraded the VM to Virtual Hardware 7 and under Disk Management you get an error message saying:
“the disk is offline because of policy set by an administrator”.
This is because, by design, virtual machine disk files (VMDKs) are presented to VMs as SAN disks starting with Virtual Hardware 7 (the virtual hardware version introduced with ESX/ESXi 4.0).
At the same time, and this is by design too, Microsoft has changed how SAN disks are handled by its Windows 2008 Enterprise and Datacenter editions.
In fact, on Windows Server 2008 Enterprise and Windows Server 2008 Datacenter (and this is true for R2 too), the default SAN policy is now VDS_SP_OFFLINE_SHARED for all SAN disks except the boot disk.
Having the policy set to Offline Shared means that your SAN disks will simply be offline on startup of your server, and if your paging file is on one of these secondary disks it will be unavailable.
Here’s the solution to this annoying problem.
What you have to do first is query the current SAN policy from the command line with DISKPART, using the SAN command:
= = = = = = = = = = = = = = = = = =
DISKPART> san

SAN Policy : Offline Shared
= = = = = = = = = = = = = = = = = =
Once you have verified that the applied policy is Offline Shared, you have two options to set the disk to Online.
The first one is to log in to your system as an Administrator, click Computer Management > Storage > Disk Management, right-click the disk and choose Online.
The second one is to make a SAN policy change, then select the offline disk, clear its read-only flag, and bring it online. Follow these steps:
= = = = = = = = = = = = = = = = = =
DISKPART> san policy=OnlineAll

DiskPart successfully changed the SAN policy for the current operating system.

DISKPART> list disk

Disk ### Status Size Free Dyn Gpt
-------- ------------- ------- ------- --- ---
Disk 0 Online 40 GB 0 B
* Disk 1 Offline 10 GB 1024 KB

DISKPART> select disk 1

Disk 1 is now the selected disk.

DISKPART> attributes disk clear readonly

Disk attributes cleared successfully.

DISKPART> attributes disk
Current Read-only State : No
Read-only : No
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : No

DISKPART> online disk

DiskPart successfully onlined the selected disk.
= = = = = = = = = = = = = = = = = =
Once that is done, the drive mounts automagically.
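If this comes up repeatedly, for example in a swing-LUN workflow, the whole fix can be captured in a diskpart script and replayed with `diskpart /s`. A minimal sketch, assuming disk 1 is the offline SAN LUN (verify with `list disk` first):

```shell
# Build a diskpart script that applies the SAN-policy fix to disk 1.
# Assumption: disk 1 is the offline SAN LUN -- confirm with 'list disk' first.
cat > fix-san-disk.txt <<'EOF'
san policy=OnlineAll
select disk 1
attributes disk clear readonly
online disk
EOF

# Then, from an elevated command prompt on the Windows host:
#   diskpart /s fix-san-disk.txt
```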
  1. So, I’m trying all this but the return message I get in DiskPart is “DiskPart failed to clear disk attributes.” Any further advice?

    DISKPART> san policy=OnlineAll

    DiskPart successfully changed the SAN policy for the current operating system.

    DISKPART> rescan

    Please wait while DiskPart scans your configuration…

    DiskPart has finished scanning your configuration.

    DISKPART> select disk 1

    Disk 1 is now the selected disk.

    DISKPART> attributes disk clear readonly

    DiskPart failed to clear disk attributes.

    DISKPART> attributes disk
    Current Read-only State : Yes
    Read-only : Yes
    Boot Disk : No
    Pagefile Disk : No
    Hibernation File Disk : No
    Crashdump Disk : No
    Clustered Disk : Yes

    DISKPART> san

    SAN Policy : Online All

    (Note from Tanny: take a look at the cluster resource manager and bring the storage resource online.)

  2. I see your problem. Have you checked that you have full access to the volume whose attributes you want to change? Is it a cluster resource? I think so, because your log says “Clustered Disk : Yes”. In that case you should stop all nodes but one, and then you will be allowed to use diskpart to reset the flags. The general idea is to grant the server you are connected to write access to the volume.
    Let me know if you need more help and, if so, please post more details about your configuration (servers and LUNs).


  3. I am having this same problem. It is in a cluster and I have shut down the other node. I am still unable to change the read-only flag.
    Please help?!

  4. Wacky problem: a SAN volume mounted to a 2008 (not R2) 32-bit Enterprise server had been working fine. After a reboot of the server, the disk was offline. Putting it back online was no problem; diskpart details for the volume showed “Read-only : No”. Got support from Dell and found that the volume was listed as read-only. Simple fix: change the volume to “Read-only : No” with diskpart. Four hours later, the volume was marked as read-only again. No changes made by us, nothing in the Windows logs.
    The disk is a Dell/EMC SAN LUN, fiber connected, in exclusive use by this machine. Another LUN of almost the same size is attached the same way to this machine, with no problems there. Appreciate any thoughts or places to look.


  6. Great article! I just spent 2 hours trying to figure out why my san disks weren’t showing and this was the fix.

    Thank you!


  7. Thank you, thank you, thank you! This article helped me with an IBM DS3000 and an IBM System x3650M3 Windows Server 2008 R2. Thumbs up to you! I’d be still trying to figure why I couldn’t configure these drives!


  8. These settings are good for Windows Server 2008 R1 and R2. It breaks again with R2 SP1 ;-(. Is there any solution for R2 SP1?


  11. This worked perfectly for me. I tried figuring it out on my own but just couldn’t get it to work within VMware Workstation.


  12. Let me know how to remove the read-only attribute and bring the disk online; if I access the SAN directly then it is possible. I have two servers: on one server the disk shows online, but on the second server it displays a reserved / disk-offline message.

    I’m also trying all this, but I get the same message in DiskPart: “DiskPart failed to clear disk attributes.” Any further advice?

    DISKPART> san policy=OnlineAll

    DiskPart successfully changed the SAN policy for the current operating system.

    DISKPART> rescan

    Please wait while DiskPart scans your configuration…

    DiskPart has finished scanning your configuration.

    DISKPART> select disk 1

    Disk 1 is now the selected disk.

    DISKPART> attributes disk clear readonly

    DiskPart failed to clear disk attributes.

    DISKPART> attributes disk
    Current Read-only State : Yes
    Read-only : Yes
    Boot Disk : No
    Pagefile Disk : No
    Hibernation File Disk : No
    Crashdump Disk : No
    Clustered Disk : Yes

    DISKPART> san

    SAN Policy : Online All


  19. Hi, same problem here. The disk says it’s a clustered disk, but I don’t have it in Failover Cluster Manager; it’s just a disk dedicated to one server from the SAN. I have cleared simultaneous connections and only one server is connected now, but it still won’t come online. Any help would be great.


Zero to 5000 Citrix VDI Users Logged-in and Working in Just 30 Minutes! (repost from Cisco Blog > Data Center and Cloud)

Making sure your users don’t go to sleep (or worse) waiting to log-on
Hi everyone! I am the team lead Technical Marketing Engineer for Cisco Virtual Desktop Infrastructure (VDI) solutions on UCS and Nexus. While I have done some blogging in my time, this is my first blog for Cisco. I have been in this space for over 22 years, since before “virtualization” was called that, working with published applications and published desktops (MetaFrame and early RDP).
Together with the Citrix and EMC teams, I have been focused for the past few months on validating what I think is a really exciting solution, even if I say so myself. So recently there has not been much time for blogging, I am afraid.
Over the last couple of years we have seen desktop virtualization, specifically Hosted Virtual Desktops (HVD), become increasingly mainstream. Today we are really experiencing an upsurge of deployments: not just pilots, but full-blown multi-thousand-seat deployments.
As you are probably aware, the worst nightmare is that you deploy the solution and the users don’t adopt it because it doesn’t provide the user experience they need or want.
One of the key requirements for success is an infrastructure that won’t just provide the right experience for the first few hundred users, but that will scale linearly as you grow into the many thousands.
You can rely on Cisco Validated Designs to deliver for you! We use real-world test scenarios to ensure that you can implement our designs in your environment and be successful.
The keys to a successful deployment of a large scale HVD environment start with:

• Detailed characterization of the virtual workloads
• Desktop Broker that supports efficient streaming capabilities
• Reliable, fast User Profile management
• Compute platform that provides linear scalability, rapid expandability, and excellent management tools across hundreds to thousands of servers
• Network infrastructure that provides the right amount of bandwidth to the right traffic
• Storage system that is capable of efficiently handling massive IOs, both on the read side for boot up and the write side for HVD ramp up and steady state
• A robust hypervisor capable of supporting advanced capabilities required for HVDs
• Fault tolerance at all levels of the solution, producing a highly available system

Cisco UCS together with Citrix technologies, EMC VNX storage, and VMware vSphere provide the key foundation for a high performance, highly available HVD environment:
• Login VSI 3.6 Medium workload was used to represent a typical knowledge worker
• Citrix XenDesktop 5.6 FP1 with Citrix Provisioning Server 6.1 provided the ultimate desktop streaming technology with the smallest storage footprint
• Citrix User Profile Manager was used to manage 5000 unique desktop user profiles
• Cisco UCS B230 M2 blade servers provided awesome compute resources and Cisco UCS 6248UP Fabric Interconnects (FIs) managed server hardware, network and storage for the environment.
• Cisco UCS Service Profile Templates and Service Profiles made server deployment fast and efficient, and ensured that each blade was provisioned exactly the same as the next.
• Cisco UCS Manager, with tight integration with VMware ESXi, handled management of all of the blades across the 5 VMware clusters used in our solution seamlessly
• Cisco Nexus 5548UP Access Switches and (for the first time in a Cisco VDI CVD) Cisco Nexus 1000V distributed virtual switches in conjunction with our FIs provided end to end Quality of Service for all traffic types from the HVD through the hypervisor, the FIs and through the Nexus 5548UPs – all at 10 GE or 8 Gb FC!
• EMC VNX7500, with FAST Cache, provided the outstanding read and write IO needed to support 5000 HVDs through boot up, ramp up, steady state, and log off
• For the first time in a Cisco VDI CVD, our design provides N+1 server fault tolerance at the VMware cluster level. Another real-world differentiator for Cisco!

Here is a look at the hardware used in the solution:

[Image not reproduced in this repost: solution hardware overview]

The highlight benefits of the joint validated design for deploying a scalable Citrix XenDesktop environment include the following:

[Image not reproduced in this repost: benefits summary]
I will be writing more about the in depth details of our Zero to 5000 solution in the coming weeks. Please let me know what you are interested in exploring!

For more information, download the Cisco Validated Design: http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/citrix_emc_ucs_scaleVDI.pdf

And for more information on Cisco VXI solutions for desktop virtualization go to http://www.cisco.com/go/vxi