clalias All American 1580 Posts user info edit post |
I am trying to get some ideas on setting up a linux cluster. Basically just starting to collect ideas.
I’ll start with some background.
So basically our current configuration is as follows. We have:
- 6 Fedora Core Linux servers used for computing
- 2 Windows servers -- one handles authentication and the other maps the network storage
- network storage of about 5 TB (many HDDs in RAID 5)
- KVM switch
- about 15 Windows desktop boxes where the users sit
The storage space is mapped to the nix servers as a shared drive and also to each workstation. When we want to run our simulation we have to log into a particular nix server using PuTTY, then run it from there. (I'll get to the downsides of our current set-up later.)
We have several in-house developed scientific programs /simulations that we run for our customers. All of these, as well as many scripts to run them, run on Linux.
Previously, we did not have the Windows servers; file serving was done via a Samba share. However, we did seem to have problems often, particularly bringing everything back online after a power outage. The Windows servers were added by our last IT guy (no longer with us, btw), who was not very comfortable with Linux. So we are looking for a good sysadmin but need to gather more ideas first.
Ok, now to the issues.
-Having to log into each server directly creates a problem and wastes time. We have to maintain 6+ PuTTY windows, log in to each server, flip back and forth to run code, etc.
-Having Linux create and manipulate files on a Windows server has obvious problems. Linux tries to set Unix permissions, and Windows rejects that idea, which throws an error (not a show-stopper, but an annoyance).
For example, the dos2unix command does not work, because Linux can't create the output file on the Windows server (it can't set the permissions). Our workaround is to alias the dos2unix command to a perl script that performs the same function.
-There is a time-sync problem between Linux and Windows that creates some very painful problems. The Linux make command we use to build our simulation executable sees that the files have a time stamp of x seconds in the future and thus re-compiles everything every time.
I have also had a situation where the file I was editing on the Windows side did not match the file I accessed on the Linux side. I.e., I edit the file in Notepad on my desktop, save and close it, then open it in vi on Linux, and the file does not contain my latest edits.
What we would like.
-A true Linux cluster (perhaps). Basically one point of entry to the cluster servers, where we have access to all of the processing capability from one interface, avoiding having to log in to multiple servers to run our code.
-A Linux file server share would be nice because of all the issues we have with the dual Windows/Linux paradigm (time-sync, permission problems, etc.).
[Edited on March 15, 2012 at 12:05 PM. Reason : .] 3/15/2012 12:03:59 PM |
mellocj All American 1872 Posts user info edit post |
Quote : | "What we would like. -A true linux cluster (perhaps)." |
what does this mean?
are you familiar with ssh keys?
heard of ldap? 3/15/2012 10:32:29 PM |
clalias All American 1580 Posts user info edit post |
what does this mean? Well, I was thinking like Beowulf.
are you familiar with ssh keys? very little. This network is NOT open to the internet; it's a self-contained, stand-alone lab.
heard of ldap ? yes, I've heard of it. but that's as far as it goes. 3/15/2012 11:51:30 PM |
smoothcrim Universal Magnetic! 18966 Posts user info edit post |
you can use keypair auth + known_hosts for ssh and not worry about interactive logins. i'd also put linux on everything if you have no need for windows. what you might want is a grid, not a cluster. you'd need to write tooling to hand out the jobs across the nodes.
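For reference, the keypair setup being suggested is roughly the following; the node names and user are placeholders, and the key is generated without a passphrase on the assumption that that's acceptable on a closed lab network:

```shell
#!/bin/sh
# generate a passphrase-less keypair (demo writes to a scratch
# directory; in real use you'd put it under ~/.ssh)
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$keydir/cluster_key"

# push the public key into each compute node's authorized_keys.
# here we only print the commands; drop the echo to actually run them.
for host in node1 node2 node3 node4 node5 node6; do
    echo ssh-copy-id -i "$keydir/cluster_key.pub" "user@$host"
done

# afterwards, non-interactive commands need no password prompt:
#   ssh -i ~/.ssh/cluster_key user@node1 uptime
```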
what is the nature of the task? 3/16/2012 8:27:15 AM |
clalias All American 1580 Posts user info edit post |
Ok, I'll read more about "keypair auth + known_hosts". Does this solve the problem of having to log in to multiple machines in different putty windows? The main annoyance here is having to keep 7 putty windows open at a time to kick off the runs on different servers.
yes, we have no need for windows. the only reason that happened was our last IT guy was not linux savvy. This has created a headache for us.
So, i'll look for more info about grids, but a first search didn't turn up what I think you are talking about.
writing tools to batch out jobs is not a problem.
we currently have many scientific programs (fortran/c++/java in dev.).
The main one we run now takes about 2 min per run, but we have to run it tens of thousands of times to complete "one" run. So currently we batch out jobs across the servers using perl, php, or csh scripts. This particular code is not multithreaded by design, though the compiler does a little parallelization.
Another code we have takes hours to run and is written with openMP instructions.
I'm most concerned about the former case, as this is done day in and day out.
[Edited on March 16, 2012 at 9:56 AM. Reason : .] 3/16/2012 9:52:18 AM |
smoothcrim Universal Magnetic! 18966 Posts user info edit post |
if you are calling the same script on each machine, make a marshal script that ssh's to each machine with a keypair
ie:
ssh -i ~/.ssh/id_rsa user@hostname kickoff.sh (note: -i takes the private key, not the .pub file). then put a line in your marshal script for each node in the grid, and you'll only need one terminal open. 3/16/2012 11:50:13 AM |
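A minimal version of that marshal script might look like this. The node names, user, and key path are placeholders for this setup, and the loop only prints the commands (a dry run) so you can eyeball them before swapping in real ssh calls:

```shell
#!/bin/sh
# marshal.sh -- kick off kickoff.sh on every node from one terminal.
# KEY must be the *private* key (ssh -i does not take the .pub file).
KEY=$HOME/.ssh/cluster_key
NODES="node1 node2 node3 node4 node5 node6 node7"

launch_cmd() {
    # build the non-interactive ssh invocation for one node
    echo "ssh -i $KEY user@$1 ./kickoff.sh"
}

for host in $NODES; do
    # dry run: print the command. For real use, replace this line
    # with:  $(launch_cmd "$host") &
    launch_cmd "$host"
done
# for real use, follow the backgrounded ssh calls with: wait
```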
clalias All American 1580 Posts user info edit post |
Thanks, I like it. But we also need to run commands like 'top', 'ps', 'kill', etc. to monitor the processes on each server. any thoughts on an easy way to do that?
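Short of adopting real monitoring tooling, a one-shot ssh sweep can cover the basics from a single terminal. This is a sketch; hostnames, user, key path, and the process name being grepped for are all placeholders:

```shell
#!/bin/sh
# one-shot status sweep across all compute nodes from one terminal.
KEY=$HOME/.ssh/cluster_key
NODES="node1 node2 node3"

sweep() {
    for host in $NODES; do
        echo "=== $host ==="
        # BatchMode avoids password prompts; ConnectTimeout keeps a
        # dead node from hanging the sweep; '|| true' moves on if one
        # node fails. Swap the ps for 'top -b -n 1 | head -15' to see
        # a load snapshot instead.
        ssh -n -o BatchMode=yes -o ConnectTimeout=2 -i "$KEY" \
            "user@$host" "ps aux | grep '[s]im'" || true
    done
}

sweep

# killing a runaway job on one node works the same way:
#   ssh -i "$KEY" user@node2 "pkill -f simbinary"
```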
thanks 3/16/2012 12:26:09 PM |
smoothcrim Universal Magnetic! 18966 Posts user info edit post |
interactive monitoring and control is going to require something more significant than scripting, you should probably look into open source tooling 3/16/2012 12:57:02 PM |
clalias All American 1580 Posts user info edit post |
OK so I am spec'ing out the network but I have a quick question on server memory.
for the computing nodes I am using 6x Dell PE R410
What is the difference in the Quad and Dual Ranked RDIMMS? Which is better for my application.
All the following are the same price.:
48GB Memory (6x8GB), 1333MHz, Dual Ranked LV RDIMMs for 2 Procs
48GB Memory (6x8GB), 1066MHz, Quad Ranked RDIMMs for 2 Procs, Sparing [add $0.00]
48GB Memory (6x8GB), 1066MHz, Quad Ranked RDIMMs for 2 Procs [add $0.00] 3/20/2012 9:57:54 PM |
moron All American 34142 Posts user info edit post |
You can use a bunch of Macs and Xgrid... http://www.macresearch.org/the_xgrid_tutorials_part_i_xgrid_basics 3/20/2012 10:48:36 PM |
clalias All American 1580 Posts user info edit post |
cool idea, but that's not going to happen - too different from our current configuration. So any idea on the server memory? Also, what about the difference between 10GbE and fibre, etc.? would I really notice a difference?
How about networked storage? http://www.dell.com/us/enterprise/p/network-storage-products?~ck=bt
If I get a PowerVault MD1200 I need a PowerEdge R710 or something. What are the advantages of the
PowerVault MD3600f? or the PowerVault MD3200i iSCSI?
Do I need SCSI? 3/21/2012 9:29:47 AM |
smoothcrim Universal Magnetic! 18966 Posts user info edit post |
none of these questions can be answered easily without some idea of the problems you are trying to solve and your budget. are you currently IO bound? memory bound? if so, memory bandwidth or capacity? cpu bound? if so, do you need more concurrent threads or higher frequency per thread? 3/21/2012 9:33:32 AM |
clalias All American 1580 Posts user info edit post |
Budget is somewhat flexible but right now I've figured around 80-90k. this includes:
- Rack
- 7 processing nodes
- 1 NFS server
- 1 server for authentication and other processes
- networked storage array
- switch
- KVM
- UPS
- tape backup unit
So, most of the work is batched out to one server, then the results are moved to the shared drive when the job completes. We need higher freq per thread. We're probably not bound by memory size (48G is plenty), but would like it to run faster. Somewhat CPU limited; I have noticed a big difference when running on older CPUs. I think the Intel® Xeon® E5620 2.4GHz, 12M Cache, Turbo, HT, 1066MHz Max Mem [Included in Price] is probably fine, but might bump it up to L, don't know yet.
most of all I need to make sure that the storage array is redundant - tape backup is the final option. Need the shared drive to be safe from both a hard drive failure and a total server failure. So does that mean we need two PE R710s (each with RAID 1 drives) hooked up to the PowerVault with 14 HDDs in RAID 6?
thanks 3/21/2012 10:22:00 AM |
cain All American 7450 Posts user info edit post |
Quote : | " 10Gbe and fibre, etc.. would I really notice a difference?" |
Are you asking about the difference between Ethernet vs Fibre Channel vs FCoE, or just about the different cable media (glass vs copper)? 3/21/2012 11:12:06 AM |
clalias All American 1580 Posts user info edit post |
The former. 3/21/2012 1:15:56 PM |
cain All American 7450 Posts user info edit post |
If you have specific questions regarding comparisons on the 3 protocols pm me. A pros/cons/general advantages on those is enough to fill a book. 3/21/2012 5:09:19 PM |
mellocj All American 1872 Posts user info edit post |
Quote : | "So, most of the work is batched out to one server then the results are moved to the shared drive when job complete. We need more higher freq per thread. Probably not bound in memory size 48G is plenty, but would like it to run faster. Somewhat CPU limited, I have noticed a big difference when running on older cpus. I think Intel® Xeon® E5620 2.4Ghz, 12M Cache,Turbo, HT, 1066MHz Max Mem [Included in Price]" |
if you're asking for hardware recommendations, I wouldn't get the E5620
your hardware specs are pretty vague ("noticed a big difference when running on older cpus"), but you'll get a lot more bang for the buck running single-socket Sandy Bridge Xeon E3 systems, or waiting for the new E5 systems, which should be available in the next few weeks. That is assuming you want "higher freq per thread". if you just want a lot of cores, amd usually wins. 3/21/2012 7:12:45 PM |
smoothcrim Universal Magnetic! 18966 Posts user info edit post |
what does your IO load look like? are you processing huge amounts of data from disk with something like hadoop? in memory database? simulations? depending on how much or how little data you have, that makes a big difference. a fusion IO card per box might make a huge difference if you are constantly paging data in and out of memory.
as far as interconnects, iscsi over 10GbE is the way to go. fiber is garbage, don't waste your time with it, and fcoe is even worse. nfs isn't block-based, so it doesn't work for this. 3/21/2012 9:48:23 PM |
BIGcementpon Status Name 11318 Posts user info edit post |
Here's what you need: http://helmer.sfe.se/ 3/26/2012 12:48:19 PM |
CaelNCSU All American 7080 Posts user info edit post |
Virtualized?
http://www.xen.org/ 3/26/2012 5:05:16 PM |
tk New Recruit 33 Posts user info edit post |
Quote : | " -A true linux cluster (perhaps). Basically one point of entry to the cluster servers where we will have access to all of the processing capability from one interface. Avoid having to log into to multiple servers to run our code." |
It should be noted that this functionality is limited even in commercial clusters. Even Sun Cluster or Veritas Cluster still requires you to manage multiple servers. While "cluster commands" can execute system changes across the cluster, those are still limited to "cluster functionality," which is generally heartbeat, shared storage control, quorum settings, and the like.
I guess the point is that you should really look at what "clustering" does to see if that's what you're really aiming for. Actively handling webserver load across hardware is going to be done in the software regardless of whether you cluster at the OS level or not, for instance. OS-level clustering is more of a storage requirement thing. If all those pieces need to use the same glob of storage, then you need to be able to control file-locking et al between multiple servers using the same sandbox to write crap to, thus quorum and scsi flags matter. That's what "clustering" at the OS level really gets you... the ability to have a bunch of servers using the same data without stepping on each other's toes.
Now, if you're talking HA, or even DR over WAN, then there's a whole other conversation we can have about how OS clustering is useful.. but that doesn't seem to be where you're trying to get to.4/2/2012 3:19:03 AM |