Aficionado Suspended 22518 Posts |
(if you were asked to give up your dreams for freedom?)
so my advisor is getting around to purchasing a cluster and i have a couple of questions for you network experts (lulz)
outbound: master → slaves
inbound: slaves → master
we have three types of problems that we will run:
stochastic: very low network overhead both ways
deterministic: tons of matrix computations, low outbound, extremely high inbound (one cluster has infiniband for the inbound, but we are being cheap)
and a coupling of both of those
since all the systems we are considering have dual gigabit network ports, we have kicked around the idea of having two separate networks, one for the outbound and one for the inbound
ok, you know the problem...what switch would you recommend for this where the requirements are rack mountable and not $texas (where $texas is a relative value, we are looking at systems right now that are about $50k with the option to purchase more slaves later, maybe 32-38 slaves total)
they dont have to be homogeneous either; if we can save on the low overhead side with something cheaper, im all for it. and if the switch is managed, i would like to be able to manage it remotely when i connect to the master 10/18/2008 10:14:55 AM |
Aficionado Suspended 22518 Posts |
so 35 views and no one knows anything
ok
lock this thing up 10/20/2008 11:48:59 PM |
BobbyDigital Thots and Prayers 41777 Posts |
maybe it's because i'm really really tired, but i don't understand what two networks would buy you given that gigabit switches are full duplex.
As long as you had a nonblocking switch, you can transmit and receive at line rate on all ports. 10/21/2008 12:05:14 AM |
Aficionado Suspended 22518 Posts |
so there arent overall switch bandwidth limits?
we are going to be moving a shitload of data. i just wasnt sure: if you have a 48 port gigabit switch and all the ports are going full bore, is there going to be an issue with overall system bandwidth?
[Edited on October 21, 2008 at 12:15 AM. Reason : ]
10/21/2008 12:12:20 AM |
BobbyDigital Thots and Prayers 41777 Posts |
if it's a nonblocking switch, yes you could run every port at line rate in both directions and be fine. However, those switches are fairly costly. One example is the Cisco 4948 switch:
http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps6021/product_data_sheet0900aecd8017a72e.html
I think they run around 10 grand.
If the ports are oversubscribed, which is usually the case at lower price points, then you're right -- You'll overrun the switch backplane or at least the port ASICs, but this will be the case even if you run two separate unidirectional networks. 10/21/2008 12:35:50 AM |
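The nonblocking vs. oversubscribed distinction above comes down to simple arithmetic: can the backplane carry every port sending and receiving at line rate at the same time? A rough Python sketch of that check (the function name and numbers are just illustrative, taken from the figures in this thread; real switches also have per-ASIC and per-slot limits this ignores):

```python
# Rough check: a switch is "nonblocking" if its backplane can carry every
# port transmitting AND receiving at line rate simultaneously (full duplex).
def is_nonblocking(num_ports: int, port_gbps: float, backplane_gbps: float) -> bool:
    worst_case_gbps = num_ports * port_gbps * 2  # 2x because full duplex
    return backplane_gbps >= worst_case_gbps

# A 48-port gigabit switch needs at least 96 Gbps of backplane:
print(is_nonblocking(48, 1.0, 96.0))  # True  -> nonblocking
print(is_nonblocking(48, 1.0, 64.0))  # False -> oversubscribed
```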
wut Suspended 977 Posts |
Bobby Im highly sedated right now so forgive me if Im wrong but:
3750's are actually designed to be oversubscribed and depending on what model you choose to buy (certainly some are uber expensive) you can have all gigE, or SFP ports, or all 10GE interfaces, and up to 64 Gbps of backplane bandwidth.
Oh and you can just buy stack cables if scalability is a concern.
I have no idea how much they cost though
Im not sure if I follow what Bobby is talking about with nonblocking/unidirectional networks. I think my meds put me in space cadet mode... 10/21/2008 1:00:02 AM |
evan All American 27701 Posts |
^they're oversubscribed with the assumption that you will not be using the full bandwidth of every port at once... which, in most cases, is true, so you can get away with it.
in Aficionado's case, however... he'd overrun the switch. 10/21/2008 1:02:15 AM |
wut Suspended 977 Posts |
ohh my bad 10/21/2008 1:03:47 AM |
evan All American 27701 Posts |
meds will do that to you 10/21/2008 1:05:44 AM |
wut Suspended 977 Posts |
What if he orders the 3750-E (I think?) line which has a 64 Gbps backplane... would he utilize all of that, you think? 10/21/2008 1:30:41 AM |
evan All American 27701 Posts |
a 64gbps backplane should, in theory, support 32 full duplex gigabit connections... this depends on other things, though
so... yeah. honestly, i think he could get away with it.
personally, i'd go for a 4500 series with a sup2+ or the sup2+ w/ 10gbe (if i recall correctly, i think it can handle a considerable bit more bandwidth than the straight up sup2+... i could be wrong, though)... only downfall is i think each slot can only do 24gbps...
the 3750-e might be a better option now that i think about it... isn't it nonblocking?
[Edited on October 21, 2008 at 2:50 AM. Reason : 3750E-48TD] 10/21/2008 2:46:52 AM |
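The "64 Gbps backplane should support 32 full duplex gigabit connections" estimate above is just backplane bandwidth divided by twice the port speed. A quick sketch, using the backplane figures quoted in this thread (the 3750-E's 128 Gbps number appears later in the thread) and ignoring per-slot/ASIC limits:

```python
# How many full-duplex gigabit ports can a backplane sustain at line rate?
# Each port needs 2x its speed (transmit + receive simultaneously).
def max_full_duplex_ports(backplane_gbps: float, port_gbps: float = 1.0) -> int:
    return int(backplane_gbps // (2 * port_gbps))

print(max_full_duplex_ports(64))   # 32 -- the 64 Gbps 3750 figure
print(max_full_duplex_ports(128))  # 64 -- the 3750-E's 128 Gbps backplane
```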
wut Suspended 977 Posts |
1. Attn Bobby, Re: ^ post
2. wait
3. profit.
**Its been 3 years maybe since Ive gone to our LAN switching bootcamp where all that was explained. I still have the material but yea... doesnt do my memory much good. 10/21/2008 3:57:15 AM |
Aficionado Suspended 22518 Posts |
i think the most slaves we would have is 31, so a 32 port switch would be all we would need; however, could i utilize the 10 Gbps uplink port to the master and get one more slave on the switch? 10/21/2008 8:23:39 AM |
wut Suspended 977 Posts |
I want to say yes, but not in a stacked configuration. Better wait for a more appropriate answer. 10/21/2008 11:32:54 AM |
evan All American 27701 Posts |
^ is right
do you have a 10gbe nic in your master? 10/21/2008 11:40:04 AM |
Aficionado Suspended 22518 Posts |
what is the availability of 10 Gbps network adapters? 10/21/2008 1:00:54 PM |
evan All American 27701 Posts |
intel makes some
last time i checked, they were around $5k
[Edited on October 21, 2008 at 1:36 PM. Reason : looks like you can pick up an xfp-based one for $2k-ish now] 10/21/2008 1:34:28 PM |
Aficionado Suspended 22518 Posts |
ok
well this is getting really expensive very quickly 10/21/2008 1:44:47 PM |
evan All American 27701 Posts |
welcome to the joys of HPC 10/21/2008 1:52:37 PM |
BobbyDigital Thots and Prayers 41777 Posts |
Quote : | "What if he orders the 3750-E (I think?) line which has 64 Gbps backplane." |
3750-E's are non-blocking as well -- 128Gbps backplane.
Quote : | "only downfall is i think each slot can only do 24gbps..." |
actually with any supervisor other than a sup6, each slot is limited to 6gbps. The 4500 modular series is not designed for this type of application.
you could fill a 4507r with 6 port gig cards, but that's going to be too costly, and require too much power and rackspace.
Best bets, at least as far as cisco products go, are the 4948 or 3750-E series switches. I don't have much awareness of other vendors, so I can't help much there. 10/22/2008 10:32:43 AM |
evan All American 27701 Posts |
^
Quote : | "the 3750-e might be a better option now that i think about it... isn't it nonblocking?" |
10/22/2008 11:56:48 AM |
wut Suspended 977 Posts |
Called it 10/22/2008 1:39:51 PM |
cain All American 7450 Posts |
you can go balls out and get a cisco nexus 5020, it has 52 10g ports.
I'd recommend the 4948 for your problem description, however. As far as 10gig to host goes, i dont know that you can actually run those hosts off the 10g uplinks on the 4948 or the 3750-e. I'd be interested to know the answer on that part actually 10/22/2008 2:16:18 PM |
evan All American 27701 Posts |
i was under the impression the 10gbe links could be stuck in any vlan or used for trunking 10/22/2008 6:23:50 PM |