Anyone out there have examples of how they've virtualized Exchange 2010? RAM requirements?


markm75g

I'm still digging to find this exact info. I've seen the guides on virtualization and the guides on requirements, but I'm looking for some real-world input here.
We have 40 users, and I am going to virtualize Exchange 2010. I probably won't initially use the UM role.
Currently we use 8GB of memory on the single Exchange 2007 box. I'm assuming I'll install the VM (or VMs) for Exchange 2010 and then migrate the mailboxes over time, while changing the IP/SMTP settings to point to the new Exchange 2010 Hub Transport.
Can anyone tell me what a good amount of RAM would be to allocate in our case?
Would I want the CAS/HUB roles on one VM and the Mailbox role on its own, or should I do it all on one?
Right now it's all on one box, with ports opened for OWA (no DMZ), but I am going to use a DMZ on the new deployment (so I'm guessing the CAS/HUB server will live in the DMZ, not joined to the domain?).
RAM-wise, I wasn't sure how much to allocate to each box, i.e., which role needs more than the other.
Thanks for any tips.
 

Brian Day MCITP [MVP]

Can anyone tell me what a good amount of ram would be to allocate in our case?
I know this isn't exactly what you're looking for, but you should allocate just as much memory as you would for a physical machine to run properly. Even though you are virtualizing the box, you are still asking it to perform the same amount of work.

I would point you to two resources to help get a better idea of what to do.

http://msexchangeteam.com/archive/2010/01/22/453859.aspx

http://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032428204

Brian Day, Overall Exchange & AD Geek
MCSA 2000/2003, CCNA
MCTS: Microsoft Enterprise Server 2010, Configuration
MCITP: Enterprise Messaging Administrator 2010
LMNOP
 

markm75g

Thanks for the links. I had already been through the second one, the webcast.
I am treating them as physical boxes for RAM anyway, but what I'm not getting out of any of these guides is the best way to divide the roles up into multiple VMs (or whether to at all), or how much RAM I should figure on allocating to each VM (from that spreadsheet, I didn't see a RAM calculator at first glance).
As far as allocating CPU cores, I've seen that up to 4 per VM are recommended. As of now we only allocate one per VM on an 8-core host, with about 11 VMs running, so I can probably spare about 2.5 cores on average per VM if it's just two VMs (a 2:1 ratio across 16 cores). Later on we will add a third Hyper-V host and I can do more.
 

Mike Pfeiffer

This may be of some help:
Understanding Memory Configurations and Exchange Performance
http://technet.microsoft.com/en-us/library/dd346700.aspx
It provides minimum and recommended memory configurations for each role, as well as multiple role combos.
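As a rough illustration of how those per-role numbers end up getting used, here's a quick sketch. The base and per-mailbox figures below are placeholder assumptions, not the official values, so substitute the real minimum/recommended numbers from the TechNet article above:

```python
# Rough per-role RAM planning sketch. All figures are placeholder assumptions;
# substitute the minimum/recommended values from the TechNet article above.
ROLE_BASE_GB = {
    "cas_hub": 4,        # combined CAS + Hub Transport VM (assumed figure)
    "mailbox": 4,        # Mailbox role base (assumed figure)
}
PER_MAILBOX_MB = 6       # assumed per-mailbox database cache figure

def mailbox_vm_ram_gb(mailboxes):
    """Mailbox-role base RAM plus per-mailbox cache, in GB."""
    return ROLE_BASE_GB["mailbox"] + mailboxes * PER_MAILBOX_MB / 1024.0

if __name__ == "__main__":
    users = 40
    print("CAS/HUB VM: %d GB" % ROLE_BASE_GB["cas_hub"])
    print("Mailbox VM: %.1f GB for %d mailboxes" % (mailbox_vm_ram_gb(users), users))
```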
 

sketchy01

You'll get lots of answers saying "it depends" (well, of course it does). If you're like me, that wasn't what you were looking for.

I have some posts about my implementation of Exchange 2007 in a virtualized environment for about 50 users/100 mailboxes. I think you might find them interesting. I posted the information because it seemed as if nobody else would.

http://itforme.wordpress.com/2010/01/03/resource-allocation-for-virtual-machines/

http://itforme.wordpress.com/2009/10/05/exchange-2007-better-late-than-never/

In a nutshell: a VM with 2.5GB of RAM and a single vCPU serves up all Exchange roles, running at about 40% CPU utilization during business hours.

I'm going to respectfully disagree with the person who posted that you'd size it just as you would a physical box. Don't do it! That is the reason most people over-allocate resources when they are new to virtualizing their services. Start small and work your way up, but only if your monitoring of resources indicates that you need to.
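To make "start small and watch your monitoring" concrete, here's a minimal sketch of the kind of logging you could run inside the Exchange VM. It assumes the third-party psutil package (pip install psutil), and the thresholds are arbitrary examples:

```python
# Minimal "start small, then watch" sketch: sample CPU and memory on the
# Exchange VM every few minutes and log it, so you have real data before
# deciding to add RAM/vCPUs. Thresholds below are arbitrary examples.
import time
import psutil

SAMPLE_SECONDS = 300          # one sample every 5 minutes
CPU_ALERT_PCT = 70            # example thresholds, tune to taste
MEM_ALERT_PCT = 90

while True:
    cpu = psutil.cpu_percent(interval=5)          # 5-second average
    mem = psutil.virtual_memory().percent         # % of RAM in use
    line = time.strftime("%Y-%m-%d %H:%M:%S") + f" cpu={cpu:.0f}% mem={mem:.0f}%"
    if cpu >= CPU_ALERT_PCT or mem >= MEM_ALERT_PCT:
        line += "  <-- consider adding resources"
    with open("exchange_vm_usage.log", "a") as f:
        f.write(line + "\n")
    time.sleep(SAMPLE_SECONDS)
```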
 

Brian Day MCITP [MVP]



I'd be more than open to hearing the reasons not to. :) If the box is set up for xxxx users and requires xxxx memory to perform the job at hand, why wouldn't you give it that? If you're on a vSphere-ish type of hypervisor you'll get page sharing anyway, so oversubscription isn't (always) a bad thing. Please keep in mind I believe in properly sizing memory for a physical box in the first place, so I wouldn't just toss 64GB in there, say "That should do it!", and then allocate the same for a VM. :)
Brian Day, Overall Exchange & AD Geek
MCSA 2000/2003, CCNA
MCTS: Microsoft Enterprise Server 2010, Configuration
MCITP: Enterprise Messaging Administrator 2010
LMNOP
 

sketchy01

Because the calculation for, say, xxx users works out to, say, 3GB of RAM needed. When commissioning that server, if you had 8GB in there, does anyone pull that RAM out? Of course not; the system may not even allow you to if you wanted to. So administrators everywhere will equate the requirement with a current system that runs 8GB of RAM, and decide the new system had better have 12 or 16GB just in case. See how that snowballs? Jump onto any of the virtualization forums and you'll see loads of well-intentioned administrators applying this practice to a virtualized environment. Then they wonder why everything is dog slow and why they were only able to get a few systems virtualized onto one host instead of many, many times more. It turns out they assigned 4 cores and 12GB of RAM to a VM because they thought more must be better.

The difference is more about the practicalities of provisioning a physical server. In the physical world, you have one server that is going to be doing this work, and you typically throw in as many resources as the budget allows because you have to make it last for the three or so years that it's going to be a production system. You are also limited to the resource increments of the physical server (e.g., 4GB of RAM rather than 2.6GB, or 4 cores instead of 1). I've purchased and provisioned hundreds and hundreds of servers over the past 15 years, and I've never seen a case where someone commissioning a new Exchange server would pull RAM out of it because they think it's only going to use 75% of the RAM. The purchasing and provisioning always errs on the side of over-allocation, especially when the administrator *might* be running other services on there.

By the way, transparent page sharing (TPS) is typically believed to save about 30% of the total RAM allocated across VMs with like operating systems. It's fantastic, but the savings can be eaten up/offset by poorly provisioned VMs.
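A quick back-of-the-envelope on what that ~30% TPS figure buys you, using made-up host numbers:

```python
# Back-of-the-envelope for transparent page sharing (TPS): if like-OS VMs
# share roughly 30% of their allocated RAM (the figure mentioned above),
# how much can you allocate on a given host? Numbers are purely illustrative.
host_ram_gb = 24
allocated_gb = 28          # sum of RAM assigned to all VMs on the host
tps_savings = 0.30         # optimistic sharing estimate for similar guests

effective_use_gb = allocated_gb * (1 - tps_savings)
print(f"Allocated {allocated_gb} GB, ~{effective_use_gb:.1f} GB actually resident")
print("Fits on host" if effective_use_gb <= host_ram_gb else "Over-committed even with TPS")
```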

I absolutely agree with you that your criteria will dictate the resources needed, but through careful observation as well. So perhaps it's just a matter of semantics, but I wanted to throw in my two cents to the original poster so he doesn't get caught in that trap.
 

Brian Day MCITP [MVP]


It's all very good information and I'm glad you took the time to write it for everyone. As I tried to get across before, when we stand up new machines, most of the time they're new deployments and are ordered with what we deem the appropriate amount of memory from the get-go. We try to use our servers long enough that by the time their purpose is done they are decommissioned and sent out to pasture. I understand and agree that probably isn't the case for many folks, so if I take my own personal work-environment blinders off for a moment I can definitely see where people could get trapped by my previous comment. :)
Brian Day, Overall Exchange & AD Geek
MCSA 2000/2003, CCNA
MCTS: Microsoft Enterprise Server 2010, Configuration
MCITP: Enterprise Messaging Administrator 2010
LMNOP
 

markm75g

Thanks for the links and input. This is great, more than I've been able to find from casual searching.
Just a bit more background on our servers: they are dual-CPU quad-core Xeon boxes, custom-built Supermicro 4U chassis with 16 drive bays that can take both SATA II and SAS, with 16GB and 24GB of RAM currently. I am actually running about 12 VMs on each box, all assigned a single vCPU. I guess since the load is spread so thin (generally one role per VM), they never peak at more than about 15% of their vCPU, and the host CPU overall barely blips. We only have 40 users and use RAID 6 data arrays (generally one spindle set for all VHDs, though yes, I should break it up). I've been tossing around the idea of a SAN before adding a third Hyper-V host.
RAM is at about 15GB used of 16GB on one host and about 20GB of 24GB on the other (but they can expand to 64 or 128GB each). I don't want to over-budget RAM on this, so based on some of these stats I'll probably set things smaller and watch performance. I'm still tempted to allocate at least 8GB if it's all on one VM, as 2007 is now.
It appears, though, that maybe I should break it up into VM1: CAS/HUB, VM2: Mailbox, VM3: UM (if I ever use it), and maybe VM4: Edge (DMZ). VM4 is tricky; yes, I've heard I shouldn't create DMZ VMs unless the host is entirely a DMZ host (not just using two NICs/virtual NICs), but budget may dictate that I have to for now.
Does OWA live on the CAS role? I can't recall. I thought there was a way to get OWA out in the DMZ (the Edge role?). We don't use Edge or a DMZ currently, but our SonicWall 2040 supports a DMZ port where I could set up ISA-style reverse-proxy rules if need be (locking everything down and then opening back up).
Right now everything is on one box and runs fine. It would seem there is no reason not to just do the same, though yes, the best-practice guides and the restrictions on the number of mailboxes say otherwise (I'm not sure why combining roles restricts the mailbox count, as I thought I read). We only have 40-50 mailboxes and may grow to about 90 over 5 to 10 years (a guess).
If I just go with 1 core, it sounds like both the CAS/HUB VM and the Mailbox VM could be set as high as 4GB of RAM each (a total of at least 8GB more needed, or maybe less based on real-world tests ;) ). Again, it's about balancing these VMs across our loaded (but not really CPU-loaded) hosts and budgeting the correct amount of RAM to add to get us by until we get that third host (and maybe a SAN).
I'm just unclear on the DMZ/OWA thing.
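For what it's worth, this is the kind of headroom math I'm doing for the two hosts. The host figures and planned allocations are my own assumptions, nothing official:

```python
# Quick RAM headroom check before carving out the Exchange VMs.
# Host totals/used and the planned per-VM allocations are assumed numbers.
hosts = {"host1": {"total_gb": 16, "used_gb": 15},
         "host2": {"total_gb": 24, "used_gb": 20}}

new_vms = {"cas_hub_vm": 4, "mailbox_vm": 4}   # planned allocations in GB

free = sum(h["total_gb"] - h["used_gb"] for h in hosts.values())
needed = sum(new_vms.values())
print(f"Free across hosts: {free} GB, new Exchange VMs need: {needed} GB")
print(f"Extra RAM to buy: {max(0, needed - free)} GB (before any TPS savings)")
```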
 

sketchy01

Sounds like you'll be fine. You'll notice that RAM is *probably* going to be what you run out of first, which is why I would start conservative on that. Second to that is disk I/O, if that isn't set up well. About the only time you might find CPU cycles being eaten up is if you have some on-host backup stuff going, or maybe some AV. Other than that, I've found the pattern of usage to be very consistent. I can't say from experience myself, but from the sounds of it, the Exchange Team has done a great job of reducing I/O in general.

Since the VM world really lends itself to the scale-out versus scale-up approach, you could just do as you planned: lump the roles onto one VM for right now, then as your needs grow, scale by moving some of those Exchange roles onto different VMs. And as you referred to, splitting them off, especially into DMZs, can get tricky when you are trying to isolate and secure them.

I had enough to tackle in the transition from Exchange 2003 to 2007 that my primary intention was just to have a single server running all roles and then go from there based on resources. The practical observation has been that there is no need for me to split them off at this time. That's great, because I have way too many other things to do. So my OWA comes from my primary server (some Exchange guy would probably hit me for this), and then I use an ISA gateway to publish the service. Actual email goes out to a mail relay server (virtualized, but not in my cluster) that is completely segregated from my networks and VLANs. The relay server isn't an Exchange server.

The world will really open up for you when you get a SAN.
 

sketchy01

You are far better than I, Brian. I'm usually afraid to ask for more RAM after I've made them buy a new system, so I'd sneak in as much as I could. :)
 

markm75g

Yeah, the more RAM you give it, the more it uses; that's by design (it eats it all, up to 100%) from what I've read since 2007 came out.
Yep, maybe putting those roles on one box will be OK, or maybe I'll just lower the RAM on VM #2 (CAS/HUB) for now and expand in 6 months if it thrashes or the CPU spikes a lot.
I guess in my situation my relay "server" is the SonicWall gateway, which would probably be the equivalent of your ISA server, so you don't have a perimeter/DMZ zone per se then either?
I wasn't able to work out exactly how you "DMZ" the OWA part, since it is part of the main server, at least if all roles are on one box. And even if not, OWA ends up on the CAS role, does it not (perhaps this implies putting that role in the DMZ with the VM joined to AD)?
And yes, I can't wait for the SAN. It appears a SAN with around 6-8TB (we have 4.8TB of data), expandable, dual-controller, and a decent performer would run us around $14k; that's with about 60% of the bays filled in the first chassis (if it had 12). $14k for a 32-40 person company is a bit steep, but I'm seeing they may be willing to consider it. (I'm envisioning an HP system here; I've heard that units like the Promise VTrak are better suited as backup-type devices.) As these host servers cost around $5,200 with the capacity they have in them, that could be cut back to a small blade next time :) (probably still $3,200, but a savings no less).
 

sketchy01

You'll notice that RAM pre-population thing is more prevalent in Win2008 and the like. SQL 2008 has a tendency to do the same thing if you let it. That is the other danger of over-committing resources. I'm just taking a quick look at my Exchange server again, and with 2.5GB of RAM and 1 vCPU running all roles, it's averaging 25% CPU utilization and about 30% active memory over the past week. This is for about 50 users.

Sorry for not explaining my topology better (I didn't want to assume you wanted the details). Here it is: an ISA 2006 server acts as my perimeter/gateway (equivalent to your SonicWall). On the internal network segment I have my Exchange server, AD controllers, DNS, etc. On a DMZ segment that comes off another port on the ISA box, I have a mail relay server and web & FTP servers. Inbound mail hits that mail relay server first (which does a lot of spam filtering, etc.), then goes inward to my Exchange server. The reverse is true for outbound mail. None of the servers on my DMZ network are part of my internal domain. My ISA box is a member of the domain (this was highly debated years ago, but the security geeks seem to have come to the conclusion, which I agree with, that it is better to have that ISA box joined).

For my Exchange server running all roles, I do not use any of my perimeter/DMZ networks for OWA. I publish the service (in ISA speak) for OWA so that it interacts with the Exchange server correctly (using the right authentication and proxy methods, etc.).

Be sure to factor in snapshot capacities when figuring out how much SAN space you need. Check out Dell/EqualLogic's PS4000 series of iSCSI-based SAN arrays. They scale in capacity and I/O beautifully, and since the feature set is all-inclusive (no add-on SKUs added to your bottom line based on features you may or may not want), they are very simple.
 

markm75g

Awesome, thanks for the details.
Yeah, for inbound mail we also have the SonicWall anti-spam device (relay). So it sounds like you have what I think is referred to as a three-legged DMZ, which is basically what I will do, using the SonicWall instead of ISA. I guess I could also put an ISA server on the inside for DMZ-to-LAN protection, but I think that's overkill (maybe not?).
So I still wonder how you DMZ the OWA portion, though. I know there is the Edge role, but I think (someone correct me if I'm wrong) the Edge role is akin to my SonicWall anti-spam device, i.e., putting a relay out in the DMZ, so for us that would be a duplicate and not needed.
Do you agree that, if using VMs for the DMZ area, those VMs should technically be on a DMZ-only host, not dual NICs on the same box?
Your SAN, I see, is iSCSI. I was probably going to go Fibre Channel. Yeah, it's around $5,500 up front for the equipment, but after that the cost is mitigated, meaning each new HP chassis we add is around $8,000 (I think). The HP does controller-based snapshotting; I've heard you should add about 60% storage to your SAN if you use it. However, we use DPM on a dedicated backup server, so I doubt I'd use snapshotting, or only in a limited way.
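Roughly what that 60% snapshot reserve does to the capacity math, using assumed growth numbers:

```python
# SAN sizing sketch: current data plus growth, plus the ~60% snapshot
# reserve mentioned above. Growth rate and horizon are assumptions.
current_tb = 4.8
annual_growth = 0.15          # assumed 15%/year data growth
years = 3
snapshot_reserve = 0.60       # controller snapshot overhead, if used

projected_tb = current_tb * (1 + annual_growth) ** years
with_snapshots_tb = projected_tb * (1 + snapshot_reserve)
print(f"Projected data in {years} yrs: {projected_tb:.1f} TB")
print(f"With snapshot reserve:       {with_snapshots_tb:.1f} TB usable needed")
```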
By the way, even using Brian's calculator above, once I found the memory section it did turn out that for around 60 users, 8GB for all roles on one box was about the recommended RAM. So I can probably scale back, then outward (if clients report sluggishness, or the CPU on the VM or VMs hits 70% or higher, etc.).
 

sketchy01

I think you are fine with the setup you describe. One could probably shoot holes in that assessment, but there is a practicality here that also has to be considered, something that deployment guides unfortunately almost never address. It sounds like you have a pretty comfortable handle on the numbers, so I'd go with your gut on that.

Recommendations that DMZ-based VMs should be on a DMZ host stem from two reasons. One is the security side, but the other is the topology. Let's say you have your vSwitch and port group for your normal LAN traffic. You could create and assign another port group with a VLAN so the DMZ traffic flows correctly, but more than likely you will still need a physical wire to uplink that VLAN to one of the legs on your DMZ segment. No problem there either, unless you have a mix of physical and virtual machines on that DMZ leg, as you need the physical server's OS to understand VLAN tagging. So the solution for mixed VM/physical servers on your DMZ segment is a simple/small switch coming off of your DMZ leg. Run two of the NICs on your dedicated DMZ host to that mini-switch, then assign those two NICs to their own port group. That way you get a nice bit of redundancy as well. My apologies if I didn't make this as clear as it should be; I'm trying to get some work done right now.

It's almost a religious war out there on SAN technologies, so you may get some very passionate advice on the matter. Look around. Lots of vendors riding the Fibre Channel gravy train are running a bit scared right now because of iSCSI. Fibre Channel can get tricky because if you scale out the arrays on the same controller backplane, you'll get the response of "oh, you have to upgrade your controllers before you can stack more arrays on there," etc. Before you know it you are doing a pseudo rip-and-replace of the SAN for any sort of upgrade. The EqualLogics scale out in capacity and I/O with each array you add. Anyway, just something for you to look at. If you go with iSCSI, just be sure to have a minimum of 6 physical network ports on each host (either 3 dual-port NICs, 6 individual NICs, or a combo of both).
 

markm75g

Well, in theory I'd have only virtual servers on the DMZ leg. I was figuring on taking the DMZ port from the SonicWall to a separate switch, then running that into a NIC port on the physical host that would essentially become the DMZ virtual NIC, for whichever virtual servers I deem/associate with that DMZ-only NIC/virtual NIC.
I was also thinking, like you mentioned, that on my existing shared switch I could just assign a few ports (well, really just one for the incoming DMZ port and the outgoing one to a single NIC on the host) as a VLAN. But is a VLAN just as good as having a physically separate switch for the DMZ?
On the SAN: the HP one, for instance, says you can stack multiple chassis with no problems via Fibre Channel. I've also heard that Fibre Channel tends to perform a lot better than iSCSI, but maybe this is misinformation.
 

sketchy01

I think whichever suits you right now will be fine. Many folks in the security world would say that VLANs were never meant for security purposes, and they certainly have legitimate reasons for saying so. But at the same time, I think it's done quite often. ISPs use Cisco's VLAN Trunking Protocol to share Metro Ethernet-based circuits all the time, although I might not use ISPs as a good model of security-minded folks. :) Anyway, it might be more straightforward to just start with a physical switch; you can change it later if you decide to clean up the configuration a bit.

Yeah, I didn't mean to imply that FC won't scale; it's just that the hidden cost of scaling can bite pretty hard. They'll push pretty hard to get you invested in the technology, and then you get dinged for everything you want to do (replication, snapshotting, expandability, monitoring, etc.). I have a good friend who was a consultant deploying virtualized environments all over the place and saw how some of the "scope creep" worked. He said it was unbelievable at times.

Yes and no on the better performance. With an FC SAN, you might carve out LUNs with some being RAID 1, others being RAID 5, etc., trying to do all sorts of calculations and hoping you figured out the IOPS right. The EQ unit works a different way: it uses all the spindles in the array for any given LUN you create. That high spindle count helps give good IOPS, and the two 3-port controllers give passive failover and good MPIO bandwidth. What's interesting with the EQ is that when you add a second unit, you get double the back-end bandwidth and an increase in spindle count. If you have all SATA drives in one and SAS or SSD drives in the other, it's smart enough to know which array provides the highest IOPS and will dynamically move your heavily used LUNs to the faster-performing one. And yes, I favor the EQ because it's all-inclusive; there isn't one add-on you have to worry about buying, because it's all included.
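For anyone following along, here's the usual rule-of-thumb math behind "spindle count and RAID level drive your IOPS." The write penalties are the standard rules of thumb and the per-disk figure is a ballpark assumption:

```python
# Rough functional IOPS estimate using the common write-penalty rule of
# thumb (RAID10 = 2, RAID5 = 4, RAID6 = 6). Per-disk IOPS is a ballpark
# assumption for 15k SAS spindles; adjust for your own drives.
def functional_iops(disks, iops_per_disk, read_ratio, write_penalty):
    raw = disks * iops_per_disk
    write_ratio = 1 - read_ratio
    # Host IOPS the array can sustain once writes are amplified by the penalty.
    return raw / (read_ratio + write_ratio * write_penalty)

# Example: 14 spindles at ~175 IOPS each, 60/40 read/write mix
for name, penalty in (("RAID10", 2), ("RAID5", 4), ("RAID6", 6)):
    print(name, round(functional_iops(14, 175, 0.6, penalty)), "IOPS")
```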

The advantage of iSCSI is that it leverages your traditional IP infrastructure, so, for instance, replication is a breeze, as it can be routed, constrained, etc., with commodity hardware. Much of the performance concern goes out the window as 10GbE rolls out.
 

OliverMoazzezi [MVP]

I'm going to jump in here, and say this.

"Ram and CPU utilization for 50 users will vary WILDLY depending on the contention ratio for the Exchange Server and the User profile for the Exchange Server".

So, if I'm building an Exchange server, I will work out the RAM I need, adhere to that, and tweak if necessary.

I think it's stupid to point to administrators putting physical boxes in their datacentres (boxes that more than likely came as a packaged option with all the RAM already in them) and use that as a pro argument for memory oversubscription.

The truth is, we should be proactively helping people who don't understand to do their homework and get their server CPU/RAM profiles _BEFORE_ the purchase of hardware, or if hardware is spare and sitting on the build table, at least give them the insight to build a server correctly and not waste hardware through oversubscription.

Ross Smith IV and others have helped here with the Exchange calculator, which will give you real-world scenarios as best it can (it doesn't cater to third-party products or other applications on the box). The link is here:

http://msexchangeteam.com/archive/2009/11/09/453117.aspx

I believe it is bad information to explicitly tell someone to undersize their RAM. What people should be saying is:

Do your homework
Profile your users as best as you can
Use the provided sizing recommendations from TechNet
Use the Storage Calculator
Finalise your design with your ratified CPU/RAM requirements.

I could have one Exchange server with 50 users using only 2.5GB of RAM, and another using 8GB (like the one I have here right now, with 66 users on it and only 80% concurrency!).
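As a sketch of what "profile your users" can look like in practice, here's a rough estimate of Mailbox-role database cache from an average send/receive profile. The 3MB-per-50-messages/day scaling is my recollection of the 2010 guidance, so verify it against the calculator linked above before trusting it:

```python
# "Profile your users" sketch: estimate Mailbox-role database cache from an
# average send/receive profile. The 3 MB per 50 messages/day scaling is an
# assumed figure; confirm it against the Exchange calculator linked above.
def db_cache_mb(mailboxes, messages_per_day):
    per_mailbox_mb = 3 * (messages_per_day / 50.0)   # assumed scaling
    return mailboxes * per_mailbox_mb

for profile in (50, 100, 150):
    print(f"{profile} msgs/day x 66 mailboxes -> "
          f"{db_cache_mb(66, profile):.0f} MB database cache")
```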

Oliver
Oliver Moazzezi | Exchange MVP, MCSA:M, MCTS:Exchange 2010, BA (Hons) Anim | http://www.exchange2007.com | http://www.exchange2010.com | http://www.cobweb.com |
 

sketchy01

Oliver, I never suggested undersizing the RAM. I suggested setting it to the minimum correct size based on his environment. Having others report back different utilization numbers shouldn't be surprising to anyone. Since the original poster stated he was looking for some real-world input, that is what he got. It shouldn't be assumed that a person asking these questions has not done their homework. Unfortunately, it speaks to a larger problem: the convoluted information on sizing that does not factor in today's deployment technologies. What in the world is he supposed to do with "profile your users as best as you can"? How does he begin to quantify that and apply it to his use case?

The Exchange Team appears to be doing some fantastic things, and those of us in the trenches are excited about that. But from the perspective of those who have to make sense of some of this and put it into production, there is still quite a bit of work to be done in understanding how these solutions are really deployed and what information IT administrators really need.
 