SATA RAID With Mandrake 10.1

zenarcher

I seem to have created quite a challenge for myself, being new to Linux and completely unfamiliar with RAID at the same time. I really need some help here. I have installed a pair of SATA hard drives in my system (WD800JD, 80 Gig, 7200 RPM). My motherboard is an MSI KT6V, using the VT8237 for RAID. I have 512M of RAM. I read on another forum that many people were having a problem because that specific WD hard drive could not be recognized by the VT8237. I seem to be fortunate in this respect, as both were immediately recognized and properly identified, and I was able to set them up just fine. I want to set them up as a RAID0 array. After doing so in the BIOS, I attempted to install Mandrake Linux PowerPack 10.1. This would not work until I found that I must disable APIC in the BIOS. After doing so, all seemed well.

 

I then booted the computer with the first Mandrake disk to install. Immediately, Mandrake installed the drivers for the VT8237. The install process went along just fine...everything appeared normal. At completion of the install, Mandrake advised me to remove all disks and reboot, which I did. The system went through the BIOS fine, but when Mandrake goes to start, rather than starting, I merely see about half a screen full of the number "9". Nothing more.

 

I then went back and deleted the array and created a RAID1 array in the BIOS. I reinstalled Mandrake, VT8237 drivers installed fine, as did the disks. At the end of the install, I removed all disks and rebooted. Mandrake works fine! No "9's" on the screen. Apparently, it works fine as a RAID1 array, as the problem is only with the RAID0 array.

 

Looking at the partitions when in RAID1 array, the first disk shows all the partitions. If I move to the second tab, the second drive is there, but no partitions appear. So, not knowing anything about RAID, I'm wondering if it's actually a mirror, as it's supposed to be in RAID1.

 

Anyway, I'm totally confused. I am a Linux newbie and even less skilled in RAID. This is my first attempt with RAID. I really would like to figure this out and use a RAID0 array.

 

As I say, in the BIOS and RAID setup, both hard drives appear...the names, serial numbers, sizes, etc. are all correct, so I believe they are working okay.

 

Any help would really be appreciated!


gidday zenarcher

 

I'm not really much of a guru when it comes to onboard SATA-RAID issues, as I only get in contact with good old DPT-SCSI RAID boards or dedicated SATA controller cards, of which we have a good number in the machines here. But anyway ...

 

The infamous "9"s ...

This usually indicates that the Linux loader (lilo or grub) isn't happy with the partition type/filesystem it wants to read its startup code from. In your case, the bootloader seems to consider the RAID0 stripe set as something "invalid".
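
Not a fix yet, but a way to gather evidence: if you boot the first install CD in rescue mode and mount the installed root partition (here assumed under /mnt), you can check where lilo was told to write itself; a sketch, assuming lilo is the loader ...

Code:
# grep -E '^(boot|root)=' /mnt/etc/lilo.conf    (where the loader and the root FS live)
# chroot /mnt lilo -v                           (re-run the map installer, verbose)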

 

Why does it work on a RAID-1 mirror-set then?

Basically, my guess is that your installation doesn't utilize SATA-RAID at all. Due to the way "mirroring" works, the disks are nevertheless available ... or at least one disk is. If the mirror set were properly set up, you wouldn't even see your second drive in a regular disk manager (though I admit that this could be possible with SATA drives; dunno).

 

What to do now?

Foremost, I strongly advise you NOT to install anything important onto a plain RAID-0 stripe set! RAID-0 is only a valid option if you also mirror the stripe set (RAID0+1). This, of course, needs at least 4 disks (2 for the RAID0 stripe, 2 to mirror the RAID0 set).

 

It's also a good idea to use a "plain" boot partition with a standard file system on it (ext3 recommended). This boot partition doesn't need to be overly large; a handful of gigs is more than enough.

 

Let's say you have 2 x 80GB drives; then a partitioning as follows might be handy (all mentioned partitions are to be created on both drives; a rough fdisk sketch follows the list) ...

 

partition 1: 1-2GB (will be stripe-swap-space later)

partition 2: 1-2GB (ext3, boot partition)

partition 3: the rest (will be the RAID0- set later; or RAID1)

(all partitions as "primary")
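
For illustration only, here is a rough fdisk session that would produce such a layout; this is a sketch from memory (the prompts are abbreviated), and it assumes the drives appear as /dev/sda and /dev/sdb ...

Code:
# fdisk /dev/sda                 (run again for /dev/sdb)
n -> p -> 1 -> +2048M            new primary partition 1 (swap-to-be)
n -> p -> 2 -> +2048M            new primary partition 2 (boot, ext3)
n -> p -> 3 -> <default size>    partition 3 takes the rest (RAID member)
t -> 1 -> 82                     set the type of partition 1 to Linux swap
w                                write the table and exit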

 

In your RAID controller BIOS, set up a RAID 0 set by combining both #3 partitions. If you're the adventurous kind of person, you can also stitch together the #1 partitions as a RAID-0 swap volume.

 

Leave the boot partitions (on both drives) as they are, format them as ext3 during the OS setup, and have the OS-setup proggie install the boot-loader to partition 2 on drive 1 (the Mandrake setup prog should ask you about this towards the end of the setup).

 

As mount points, use "swap" for the RAID set created from the two #1 partitions, and "/" for the RAID0/RAID1 set that you created from the two #3 partitions.

 

This way the bootloader can read its data from a "real", "non-logical" volume, while the system itself resides on whatever RAID volume you created (keep in mind that RAID0 doubles the chances for complete and total data annihilation).

 

With a setup like this, you should at least be able to avoid the "9"s. Whether the system will make use of the RAID set? Who knows :)

 

Hope that helps


Thanks for the info, blackpage. I'm still kind of lost, but I printed out the info you posted here. I'll try to give you a bit more information, as well.

 

When I go to look at the Mount Points - Partitions, here is what I see:

sda 74GB

Mount Point: Home

Device sda6

Type: Journalised FS: ext3

Size: 67GB (90%)

Formatted

Mounted

 

When I look at the second tab, I see:

 

sdb

Empty

Size: 74GB

Cylinder 0 to 9728

 

When I went into the BIOS utility to set up the RAID, I had a choice between RAID0 and RAID1. There may be other options, but I was not aware of them. When I chose RAID0 or RAID1, the array was created automatically. In this RAID1 I am now using, the second drive was shown as a mirror. In the RAID0, which I want, both were striping.

 

I know, though I have never tried changing it, that when installing Mandrake 10.1 the partitioning can be done custom or automatic. I have always just chosen automatic and let Mandrake create partitions, etc., as it chose to do. I do remember that in the Mandrake install there was some question where you could determine what partition to boot from...if I understood what I saw correctly. But I'm not real sure.

 

If I understand your suggestions, I think the thing to do is to go back to the BIOS RAID utility...delete the RAID1 array and create a RAID0 array. Then, start the install of Mandrake 10.1. After that, I'm sort of lost on your instructions about using a plain boot partition with a standard file system on it. Not sure how I go about that. Also, not sure about the partitioning to mention...partitions 1, 2 and 3...with the rest in the RAID0 set later...and all the partitions as "primary."

Must I then go back into the BIOS RAID utility and combine both #3 partitions? I'm lost there.

 

I'm really sorry for my lack of understanding on this...so any further assistance would be appreciated.

 

Regards,

zenarcher


heya zenarcher,

 

first of all, I have to apologize for bits and parts of my first post. I wrote it after 16+ hrs of hardcore programming, and obviously it lacks one or two key infos :)

 

Firstly: it may have sounded as if it were possible to combine logical partitions into RAID volumes with the RAID-controller BIOS. The VIA chip we are talking about is most probably not capable of that. What I was actually talking about (and simply forgot to mention with even a single word :) is a technique called "LVM", aka "Software RAID". But back to that later ...

 

The VIA southbridge-chip "VT8237" (as used on your mobo) is only the first part in the onboard-RAID chain. To be able to use this feature at all, a secondary "software-layer" (a driver) is needed.

 

Under Windows this is usually no big deal. Under Linux, this driver layer is a profound source of major setup problems. And if this wasn't bad enough already: almost any onboard RAID chip (be it SIL, VIA or whatever) is error-prone under Linux, but the VIA controllers are legendary for fundamentally horrible Linux compatibility, and at that: the VT8237 is one of the worst. But there's no use in bashing a rotten onboard chip. Let's see what can be done now ...

 

Check the loaded modules

First thing to do in your current situation is to check if the VIA driver is loaded properly. Just because the setup proggie says "installed" doesn't mean it's actually "installed properly". Open a console (as root) and type ...

 

Code:
root@comp# lsmod

This command produces a list of all loaded modules and also provides info about how the modules are utilized. Check for list entries that contain the string "sata_<something>", where <something> is the name of the SATA-chip vendor; in your case the module to check for should be named "sata_via".
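
If you don't feel like scrolling through the whole list, you can filter it directly; a trivial example ...

Code:
root@comp# lsmod | grep -i sata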

 

Result A: module is loaded

If you indeed have this module listed, do check if it is referenced/used by any other module. You can determine this from the number and names in the column "Used by". If no module uses the sata_via module, your onboard RAID setup is very close to useless.

 

If the module is used and you still see 2 separate drives instead of one large RAID-1 volume, then something else went astray. Unfortunately, I don't have a clue as to how to solve this then.

 

Result B: module is not loaded

If no sata_via thing is found in the lsmod output, then you can try to insert it. To do so, enter the command ...

 

Code:
root@comp# modprobe sata_via

 

Two things can happen after this:

 

a) the computer hangs

To work around this, it would be a good idea to boot the OS with the noapic parameter and try the modprobe command again.
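
For a one-off test, the parameter can also be typed at the boot prompt; a sketch, assuming lilo with the default image labeled "linux" ...

Code:
boot: linux noapic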

 

b) you get a lot of error messages

These will either tell you that the module wasn't found or that it could not be loaded for some freaky reason.

 

Whatever may come up, keep us informed so that we get a clearer picture of the status your puter's in now. There are still a few other things one could do, like installing a third drive and installing the system onto that disk. With such a setup you could compile a custom kernel with VIA support built in, which might enable you to add the main drives as RAID volumes later.

 

In the meantime, you might want to inform yourself about this "LVM" thing. If you ask me, it would be the better option anyway, as it is just as performant as the onboard solution but much more stable and less error-prone.


blackpage, I sincerely appreciate the help and understand that after 16 hours, no one is that alert. Many years ago, I worked in the news business and know what those hours can do to a person. Now, being in the "retirement" age group, I don't always catch the little things, even after a lot of sleep!:)

 

Here is the listing I got in the terminal...maybe if we take it one step at a time, I won't mess anything up. Anyway, I will let you take a look at this...and maybe you can tell where to go from here. Looks like the VIA is there to me...

 

Code:
[root@localhost zenarcher]# lsmod
Module                 Size    Used by
nls_iso8859-1          3680    0
isofs                  31352   0
vfat                   11168   0
fat                    39776   1 vfat
md5                    3584    1
ipv6                   230916  10
rfcomm                 32348   0
l2cap                  19876   5 rfcomm
bluetooth              39076   4 rfcomm,l2cap
snd-seq-oss            31232   0
snd-seq-midi-event     6080    1 snd-seq-oss
snd-seq                47440   4 snd-seq-oss,snd-seq-midi-event
snd-pcm-oss            49480   0
snd-mixer-oss          17376   1 snd-pcm-oss
snd-via82xx            22372   1
snd-ac97-codec         69392   1 snd-via82xx
snd-pcm                81800   2 snd-pcm-oss,snd-via82xx
snd-timer              20356   2 snd-seq,snd-pcm
snd-page-alloc         7400    2 snd-via82xx,snd-pcm
gameport               3328    1 snd-via82xx
snd-mpu401-uart        5856    1 snd-via82xx
snd-rawmidi            19300   1 snd-mpu401-uart
snd-seq-device         6344    3 snd-seq-oss,snd-seq,snd-rawmidi
snd                    45988   13 snd-seq-oss,snd-seq,snd-pcm-oss,snd-mixer-oss,snd-via82xx,snd-ac97-codec,snd-pcm,snd-timer,snd-mpu401-uart,snd-rawmidi,snd-seq-device
soundcore              7008    1 snd
parport_pc             30976   1
lp                     9548    2
parport                33896   2 parport_pc,lp
af_packet              16072   2
floppy                 55088   0
via-rhine              17572   0
mii                    4224    1 via-rhine
ide-cd                 37280   0
cdrom                  37724   1 ide-cd
usb-storage            65504   0
loop                   12520   0
supermount             34804   1
via-agp                7360    1
agpgart                27752   2 via-agp
nvidia                 4805012 12
ehci-hcd               26244   0
uhci-hcd               28752   0
usbcore                103172  5 usb-storage,ehci-hcd,uhci-hcd
ext3                   120680  2
jbd                    49080   1 ext3
sd_mod                 19008   4
sata_via               4644    3
libata                 38020   1 sata_via
scsi_mod               103404  3 usb-storage,sd_mod,libata

 

 

Looks like one sata_via...but then, I have everything set up as RAID1 in BIOS right now, if that makes a difference, since I can't start Mandrake if I have it set up as a RAID0 array...that is when I get all the 9's on the screen when Mandrake is supposed to boot up. If necessary, I do have a small IDE drive I could install to start things...but it's only a little 4Gig test drive I keep around, so I can't install everything I use on it.

 

Will watch for your next comment.

zenarcher

 

P.S. On an additional note, here are a few other things I'm questioning. As I said, when I set up the RAID0 array in the RAID utility in BIOS, everything looked fine. When I began installing Mandrake, everything also looked fine...and if I recall correctly, I think I saw somewhere in that installation process that Mandrake was saying that I had something like 149.5 G of hard drive space...which seems right for the pair of SATA 80Gig hard drives. Further, Mandrake went through installing all six CD's and there appeared to be no errors in the install. After the install was complete, I removed the final CD, per the Mandrake box which came up on the screen and rebooted, per the button in that box. When the computer rebooted...I saw the BIOS info come up, indicating the two RAID drives in RAID0 array...then, as Mandrake was supposed to start...that is when I got the screen half full of the number 9. I'm wondering if that meant that Mandrake could not find the boot partition, or something like that? Especially, since it seemed to install all okay. When Mandrake was setting up, I let Mandrake automatically determine the partitions and such. Not knowing if this had to be done a particular way with the RAID0 config, I was afraid to try to be "custom." I just wanted to provide this info, in case I wasn't clear in my earlier posts.


'lo again,

 

I see from your lsmod output that the driver module is indeed loaded and also in use. Which brings up the question why there are still 2 separate drives available. Anyone? Hello? :)

 

Just out of interest: do you still remember what drive/partition you had chosen to boot from after your first attempt to install MDK on the RAID-0 set?

 

For the moment, I have to dig around the net for further info concerning the VIA RAID chip. Till now, all I have found is either VIA-bashing or LVM-related material (the latter is interesting, though, but not our current problem :).

 

I'll be back

(and I'm allowed to say that, as I'm Austrian too :)


Hi again,

 

Well, at least you were able to figure out what I have going from the lsmod. That's a step forward....remembering that it is in RAID1 array right now, as that's the only way I could install Mandrake and make it work.

 

I know there is/was a lot of VIA bashing, as with the Western Digital WD800JD SATA hard drives...with only that model, the VT8237 was not able to recognize the drives. In fact, WD has been replacing the drives with the 120Gig model, at no charge, since it's their problem and they can't fix it. But mine were recognized with no problem. I can look at the serial numbers of the drives, create the array and everything, just fine. Also, from what I've read, Mandrake 10.1 is supposed to work fine with the VT8237 for RAID.

 

I did not choose which drive/partition to boot from, but let Mandrake do that. As best I can recall, it said it would boot from sda. That's the best I can remember...but I think that may be where part of the problem lies. I don't think it could find the boot partition.

 

Now that I'm using a RAID1 array, I don't know why the second drive appears but shows empty. I guess that means it isn't serving as a mirror, as I think it should in a RAID1 configuration, if my understanding is correct.

 

I know a lot of people don't like RAID0, due to the risk of losing data in a crash. I always back up my important files in Home, so I'm not concerned about that. RAID1, to me, would be fine if you had real critical data files, which I do not. I'd like to be able to use RAID0 to learn more about the speed of this configuration and to increase my hard drive size, since RAID0 presents both drives as a single volume (i.e. the 148.5Gig that appears when installing Mandrake).

 

I'll watch for what you learn about this, as you understand way more than I do about this RAID. The help is appreciated!

 

And, maybe you can run for political office here, with that "I'll be back" line. It seems to have worked once!:)


Here is something I was just told on the Mandrake forum. I'm not sure about this...and I hate to wipe out everything again and find out it was not good info. Does anyone have an idea about this??

 

Start over, and don't use the onboard RAID. Use Linux software RAID. Cheap onboard RAID like that is not actually run on a dedicated hardware controller, it just steals CPU cycles like software RAID would, so it has no real performance benefits and tends to introduce compatibility headaches. Wipe the RAID configuration out, re-install MDK, and use the RAID configuration available in the installer to set up Linux software RAID. Good luck.

 

I am guessing that I would have to delete the RAID1 array I set up in the BIOS RAID utility, then at reboot ignore the prompt to set up a RAID array there...start a Mandrake install...and I guess Mandrake will give me some options for setting up a RAID array. I wonder if I would let Mandrake set the partitions and such the way it wants, or if I would have to select manually? Any ideas appreciated!

 

zenarcher


greetings zenarcher

 

The short version first:

1: Status of your current setup

Yuppers, if you switch from your current RAID setup to software RAID, you will have to delete the RAID set in the controller BIOS (which results in complete data loss).

 

2: MDK tools for RAID setup

I don't know what RAID setup utilities come with MDK these days, but you can take it for granted that the kernel comes with SW-RAID support.

 

As it goes for the relation between drives/partitions, logical volumes and all that ... (lean back, relax, this is going to be lengthy :)

 

What you have found on the MDK forum is about what I was talking about in my first post ("LVM"/"Software RAID").

 

In fact, "onboard hardware RAID" - as postulated by the motherboard manufacturers - is in no way a hardware solution. It's a marketing buzz-word, and indeed: you'd probably be better off with a genuine linux software-RAID solution.

 

The Linux SW-RAID technique is what you might know from Windows as "Dynamic Volumes". Compared to the cheapo onboard solutions, SW-RAID has some major advantages. The most interesting one is that you can use partitions for your RAID volumes instead of whole drives. Time for some rotten ASCII art, I say :)

 

HARDWARE RAID WITH ONBOARD CONTROLLERS

Let's assume you have 2 disks (D1 and D2), and both disks are attached to an onboard controller (CTRL), like your VIA thing. The device chain would be as follows:

 

Code:
[D1]
    \
     [CTRL] ---> [DRIVER] ---> [OPERATING SYSTEM]
    /
[D2]

 

In this setup the CTRL is only used to have something to attach the disk-cables to. The main work is done by the software layer "DRIVER". This driver translates all requests from the operating system to the controller and vice versa.

 

The result is (or should be) a "logical volume" whose size depends on the RAID type (D1-size + D2-size for RAID-0, or the size of a single (or the smaller) disk for a RAID-1 setup). Also, "onboard RAID" has the limitation that it can only stitch together complete physical units (= drives).

 

The DRIVER layer mentioned above is where your current problems originate. An example: in your current RAID-1 config the disks are still accessible as 2 separate drives. This tells us 2 things: a) the DRIVER layer seems to not work properly, and b) the VIA "RAID" controller is not too different from any average non-RAID SATA controller, as the disks are still available even though the driver is not working.

 

So basically, whatever you create in your controller setup-BIOS: it's all just some structures and stuff the DRIVER-layer is supposed to use to fake the impression that there is indeed some RAID-cluster in the background. The main conclusion we can draw from that is that what is called "onboard hardware RAID" is indeed just a "software RAID" as all the RAID-related work is accomplished by a driver anyway.

 

SOFTWARE RAID

If onboard RAID is "software" too, the question arises: why not use genuine software RAID instead? The principle of Linux SW-RAID is similar to HW-RAID, though with some major differences. The most important is that you can use partitions (= logical units) instead of whole drives. For the upcoming ASCII art, let's assume we have two 40GB drives ...

 

Code:
DISKS
 01 : [||||||||||||||||||||||||||||||||||||||||]
 02 : [||||||||||||||||||||||||||||||||||||||||]
SIZE:  0--------10--------20--------30--------40

 

With a partitioning utility you could create 2 primary partitions on each drive. Partition 1 (P1, size: 2GB) would be the swap space ("S") and partition 2 (P2, size: 38GB) would be the data partition ("D") that will hold the operating system and the user files. This results in a disk layout as follows ...

 

Code:
DISKS  |P1|<--------------- P2 --------------->|
 01 :  [SSDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD]
 02 :  [SSDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD]
SIZE:   0--------10--------20--------30--------40

 

With the Linux software RAID tools you could now build "logical volumes" out of the 4 partitions we have. Let's assume we want to build 2 RAID sets as follows ...

 

Code:
D1: P1 --+--> <log.vol. 1, "SWAP", RAID-0>
D2: P1 --+

D1: P2 --+--> <log.vol. 2, "DATA", RAID-1>
D2: P2 --+

 

As you can see: not only can you "glue" together partitions quite freely (as long as the sizes match), you can also vary the RAID type of the resulting logical volumes. Besides that, the biggest advantage of Linux SW-RAID is that it is embedded seamlessly in the Linux kernel and is therefore quite reliable.

 

You can, of course, create as many partitions on each drive as you like and build a handful of logical volumes with these partitions. E.g. you could create some more partitions on the drives in the above example and combine those into a fault-tolerant RAID-5 volume while the RAID-0 and RAID-1 volumes stay fully intact and unaffected.
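
To make the picture concrete, here is a sketch of how the two volumes from the ASCII art could be created with the mdadm tool that comes up later in this thread (device names are taken from the example, not from any real box) ...

Code:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1    (log.vol. 1, "SWAP", RAID-0)
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2    (log.vol. 2, "DATA", RAID-1)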

 

In terms of performance, a software RAID solution is in no way slower than an onboard RAID setup. Given that, everything the poster you quoted said is true.

 

Finally, one question needs to be answered too: "If Linux SW-RAID is so groovy, why don't we all use it?" Well, because it's not super-easy to set up. A brief overview of the complexity that awaits you can be found on this website.

 

I hope this clears a few things about "partitions" and "software RAID" as I have mentioned it in my first post.

 

good luck!

 

p.s.: Nope, I'm not having too much time, but we have a day off over here and so I thought it'd be just grand to abuse the Esselbach forum software as Desktop Publishing system:)


blackpage, you should write Linux books for dummies.:) Your explanation and ASCII art are fantastic! I've worked with M$ since DOS 5.0, with 286 processors and "huge" 40 MEG hard drives...but Linux has often left me feeling like an idiot. I live in a small town and if you want a computer fixed so it actually works, it ends up at my house. In the old, old days, I was actually involved in the mechanical design effort of the old 5.25 inch floppy drive. So I'm not new to computers, but lost, lost, lost with Linux and RAID.

 

Your explanation is actually clear and I understand what you are saying. I'm really glad you could confirm what was said on the Mandrake forum, as well.

 

Now, I understand much more. I can merely go into the BIOS utility and delete the RAID array I have now. Actually, I have a small IDE drive with Mandrake installed, so if I really mess up, I can hook it up and ask for help...or risk using my wife's computer.:)

 

I was feeling confident about all this, until I went to the website you suggested. Then I felt dumb again. Looking at RAID0 there, it talks about setting up the "/etc/raidtab" file, then running "mkraid /dev/md0" to initialize. These instructions, along with their talk about downloading RAID utilities, lose me again, I guess. Is this all something I have to do, or will Mandrake take care of it? I see they are primarily talking about kernel 2.4, which of course is not current with Mandrake 10.1. I don't understand if they are talking about editing files, creating files, or what...let alone downloading something. If you could translate their information as well as you explained software/hardware RAID, I think I could manage!

 

Have a good day off!

zenarcher


Doing some further reading and researching, I find that Mandrake 10.1 includes a RAID tool called mdadm. This looks to be a way to work with RAID arrays, from creating to monitoring them. It seems to me, however, that this tool could not be used until after Mandrake was installed on the system and had successfully rebooted after the install. Maybe someone can clarify the value of mdadm for me, as well. I'm still trying to find out if I need to do anything specific, in the way of configuring partitions, while doing the Mandrake install using software RAID.

 

Thanks,

zenarcher


heya zenarcher

 

Ad MDADM: this package is the "frontend" for the Linux software-RAID driver "md". The latter has been part of the kernel since version 2.6.x, so you should be able to use mdadm on your Mandrake box and configure some sweet software-RAID clusters with it.

 

Here's a nice page on mdadm, its invocation from the command line and its options ...

 

Link:

MDADM explained in easy terms and depth

 

I think the best news is that this tool seems to have replaced the "raidtools", and setting up RAID drives is now supposed to be a whole lot easier using mdadm instead.

 

If you take a look at the above site, you will notice that you can easily test the created RAID arrays, as mdadm lets you start and stop arrays from the shell too. In your case, a procedure as follows is required ...
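
For instance, stopping and re-starting an existing array from the shell looks like this (assuming the array is /dev/md0 and is listed in mdadm.conf) ...

Code:
# mdadm --stop /dev/md0
# mdadm --assemble --scan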

 

Note: before we go any further ... it would be strongly advisable to have some sort of Live-CD at hand to boot your machine from in case something goes wrong. With a live CD you could still edit the various config files and reset your Mandrake to a usable state.

 

1. Resetting disk-config in the VIA-controller

Even though you have already set up your box nicely and have Mandrake up and running, I'd suggest you start from scratch again, beginning with the deletion of the RAID array in your controller BIOS.

 

TIP: You can, of course, try to boot the machine after you have deleted the RAID setup in your VIA controller proggie. As it doesn't work properly anyway, chances are good that Mandrake is still there and alive after this step, which would save you at least the time of re-installing the OS.

 

2. Partition your drives

If the system still boots, you will have to re-partition your drives so that you can play around with software RAID a bit. If the system won't boot ... well, in that case you'll have to start all over again and do the partitioning from within the Mandrake installer.

 

2.1 Partition layout

In all cases I'd recommend a "failsafe" partitioning. That means starting out with a pretty basic disk setup that Mandrake can surely use. You can launch and tweak software RAID later on. Using whatever partitioning tool is available, create something like this (assuming your two SATA drives are "sda" and "sdb") ...

 

Code:
Part/Disk 1:
NR  SIZE  TYPE     DEVICE     FILESYS.  MOUNT POINT
01  1 GB  primary  /dev/sda0  Swap      swap
02  9 GB  primary  /dev/sda1  Ext3      /
03  70GB  primary  /dev/sda2  Ext3      none (for now)

Part/Disk 2:
NR  SIZE  TYPE     DEVICE     FILESYS.  MOUNT POINT
01  1 GB  primary  /dev/sdb0  Swap      none (for now)
02  9 GB  primary  /dev/sdb1  Ext3      none (for now)
03  70GB  primary  /dev/sdb2  Ext3      none (for now)

Note: we will not use /dev/sdb1 for a RAID volume. It's gonna be a backup-storage space for the upcoming procedures.

 

3. Installing the OS

After you have created the partition structures as laid out above, install Mandrake onto the small 9GB partition, named "sda1". Also, keep in mind to install the boot-loader into the MBR of a disk and not at the start of any partition (the installer should use the MBR anyway).

 

After this step you have the OS with all the fancy folders and files installed on "sda1", and the system should boot easily as no RAID is involved yet.

 

4. Creating the RAID volume

Boot into your Mandrake system and create some software-RAID volumes, using the mdadm utility. In fact you only have to create one volume ...

 

Code:
RAID volume to create:
NR  USING PARTITIONS       SIZE   RAID TYPE  MOUNT POINT
01  /dev/sda2 + /dev/sdb2  140GB  RAID-0     none (for now)

 

The command to issue to create the RAID device would be ...

 

Code:
# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

 

If there are no errors reported, you can continue to step 4.1.
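
A quick way to double-check: the kernel reports the state of all md arrays in /proc/mdstat, so after a successful --create you should see md0 listed there as an active raid0 ...

Code:
# cat /proc/mdstat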

 

4.1 Storing RAID info in mdadm.conf

As pointed out on the above-mentioned page, you should utilize a configuration file named "mdadm.conf", which is located either in "/etc" or "/etc/mdadm". You will have to check where it actually is after you have installed the mdadm package. In case there is none, check if there's at least a folder called "mdadm" under "/etc". If so, create the file there (you can make a symbolic link to that file under /etc later).

 

In the next step, open a suitable text editor (kate or whatever floats your boat) and add the following text ...

 

Code:
DEVICE /dev/sda2 /dev/sdb2

 

This starts a "RAID device description" for mdadm, and all you need is to add the specs for the drive array. This is accomplished by opening a console and issuing the command ...

 

Code:
# mdadm --detail --scan

 

Copy the line beginning with "ARRAY ..." and add it as the second line of the text file. After that, save the file under "/etc/mdadm/mdadm.conf". Just to be on the safe side, create a sym-link to this conf file under "/etc" by running the command ...

 

Code:
# ln -s /etc/mdadm/mdadm.conf /etc/mdadm.conf
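
Just as an illustration of where this ends up, the finished file might look something like this; the ARRAY line must come from your own --detail --scan output, and the UUID below is a made-up placeholder ...

Code:
DEVICE /dev/sda2 /dev/sdb2
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=<your-uuid-here>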

 

4.2 Starting the RAID device

At this point you have created a RAID-device and stored info about it in a config file. All you need to do now, is to start the array with the following command ...

 

Code:
# mdadm -As /dev/md0

 

5. Integrating the RAID device into your system

If all goes well up to here, you can begin to make your RAID volume available at boot time. Unfortunately, I'm not absolutely sure what is necessary for this. It could be that a simple entry in the "/etc/fstab" file is sufficient (and you need to add an entry there anyway). It could also be that you need to add startup scripts in the "/etc/rc.d" or "/etc/init.d" directories.

 

5.1 Mounting the RAID-device

For the time being let's start out with creating the mountpoint and the entry for "/etc/fstab". Open the file and add this line ...

 

Code:
/dev/md0 /raid ext3 defaults 0 0

 

While editing the fstab-file, add another entry for the backup partition which will be used later ...

 

Code:
/dev/sdb1 /save ext3 defaults 0 0

 

... and save it. In the next step create the folders to mount the raid volume and the backup-volume to. Open the console again and launch the commands ...

 

Code:
# mkdir /raid /save

 

At this point you should be able to mount your backup partition and your RAID array with the commands "mount /dev/sdb1" and "mount /dev/md0". If you get errors about the drive not being formatted properly, it may be necessary to re-format the device. Use whatever tool Mandrake offers for that task, or issue the command "mkfs.ext3 /dev/md0" in the console and re-mount the array with the above-mentioned "mount" command.

 

5.2 Testing the RAID-device

Now is the time to start the first tests with the new RAID device. As your first go, copy something over to the RAID volume and see if that works out OK ...

 

Code:
# cp -R /usr /raid

 

This will make a copy of /usr on the RAID array. After the files are copied, do something very "Microsoft-ish": reboot your box to check if the RAID is already available when the system boots (due to the entry in /etc/fstab).

 

If you can access all the files and folders under /raid/usr properly, you can begin to populate the RAID volume with the files 'n' folders from the root partition. Do this folder-by-folder: back each folder up, move it, and create a symlink to the new location after each folder has successfully been saved and moved.

 

IMPORTANT: Do not copy or move folders that represent special filesystems (e.g. "/proc", "/dev", "/initrd" or "/mnt").

 

It's perfectly ok if you stick to the following folders:

/bin, /etc, /home, /lib, /opt, /root, /sbin, /tmp, /usr/ and /var

 

These represent all the performance-critical sections of the OS anyway, which would benefit from being placed on a fast RAID-0 array.

 

As an example, the command sequence for the folder /usr would look like ...

 

Code:
# cp -R /usr /save
# mv /usr /raid
# ln -s /raid/usr /usr

 

What this procedure does, in brief: the "cp" line copies the /usr folder to your backup partition "sdb1" (remember the entry in fstab). The "mv" command moves the /usr folder to the RAID volume, thus "making space" for the sym-link, which is then created with the "ln -s" command. Simple as that.

 

After all the copying you should have a system that still boots from an unproblematic SATA drive, but all the system and user files are stored on the fast RAID-0 volume.

 

 

6. Bad things that might happen

Possible problems in the above procedure might arise when the system doesn't recognize and start the RAID volume at boot time. In such a case, the aforementioned addition of a RAID start script to one of the "rcX.d" folders or to the file "/etc/inittab" might be necessary.

 

I don't want to discourage you, but in case the RAID volume fails to initialize at boot time via the fstab entry, things could get a bit complex (determining WHICH rcX.d folder is the right one to store a startup script in, etc.).
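
If it does come to that, a minimal start-script sketch could look like the following; the name, location and runlevel are guesses on my part and would need to be adapted to Mandrake's init layout ...

Code:
#!/bin/sh
# /etc/init.d/mdadm-arrays -- hypothetical helper to assemble/stop md arrays
case "$1" in
  start) mdadm --assemble --scan ;;   # bring up everything listed in mdadm.conf
  stop)  mdadm --stop /dev/md0  ;;    # stop the RAID-0 volume
esac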

 

So if you run into problems, take a break at that point and keep us informed. In a fortnight or so, my new workstation should arrive anyway which would give me the opportunity to investigate the Software-RAID on a practical level with a quick MDK-test installation that I could then document well with screenshots and all that.

 

6.1 Restoring an operational status-quo

If you stumble into problems, this solution - combined with a Live-CD - will allow you to restore your old system again in almost no time. To do so, boot the Live-CD, mount your regular root-partition (the non-RAID thingie), delete the symlinks to the folders on the RAID-volume and move the saved folders back to "/".

 

7. "Homework"

If you succeed with the procedure, you can do likewise with the 2 swap partitions by combining them into a single, fast RAID-0 array, as sketched below. If you need to re-format, run "mkswap" on the resulting device and run the "mdadm" procedure again. Don't forget to alter the entry for the swap partition in "/etc/fstab".
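
A sketch of that homework, using the partition names from the layout above (on the real box the partition numbers may differ, so check first) ...

Code:
# mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sda0 /dev/sdb0
# mkswap /dev/md1
# swapon /dev/md1

... after which the swap line in /etc/fstab should point at /dev/md1.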

 

I hope this lengthy sermon will lead you to a usable RAID system. I truly would not want you to have to use your wife's puter. We all know what that means: desktop backgrounds, dynamically loaded every 2 minutes from the internet, showing the bubbly behinds of male models, pink or mauve window title-bars and "Shelly Allegro" at "18pt/italic" as menu font. That - especially in combination with the usual amount of fluffy toys placed on top and at least a dozen of screaming yellow "post-it" notes all around the monitor - is more than a man could possibly handle :))

 

Hope that helps


Good Morning blackpage. Thanks so much for this detailed information! It is sincerely appreciated! While awaiting your reply, I've been reading up on the mdadm utility, so I think I am getting a little bit familiar with it, and it does appear to be somewhat understandable, even to me.

 

Your explanation and instructions here are the best I've ever found, anywhere I've looked! If this works, you're going to have to put out a book on Mandrake RAID! I see that several people have the same question on the Mandrake forums and others and are met with "I don't know."

 

I know what I'm going to be trying here today. I'll get my wife off to work, then begin trying your instructions. I'm pretty patient, as far as doing re-installs, so if it fails, I don't mind trying a few times. I'd be thrilled to get it working. I have been printing everything out, as I know if I get this working, my wife will want the same on her system, even though she doesn't know what it is.:) Also, later this year, I plan to rebuild her system and mine (she always has to have the identical system to mine, or she is suspicious that she has a second-rate computer). In that rebuild, I'm planning to go to a new motherboard and 64-bit AMD processor. I believe the new MSI motherboard I'm looking at will support either 4 or 6 SATA drives. I'll probably move up to a full-sized server case, so I have more working room, as I also want to eventually install liquid cooling. It's just my hobby.

 

Yes, the wife's computer is not a place to be. My wife is a cosmetologist and doesn't want explanations about how it works. If there is a problem, I just find a yellow sticky note on my monitor with detailed instructions, such as "Fix it.":) I switched her over to Linux before and she freaked! She couldn't possibly use Firefox....it HAD to be Internet Explorer! So, back to Windows XP. Then, switched her to Firefox...until she got comfortable with that. Then, over to Star Office....instead of MS Office 2003..she liked it. THEN, over to Linux, using Star Office and Firefox! That worked! NOW she understands Linux!:) She misses a couple of little MS games, so I'm patching an old computer together, with Windows XP and the couple of games, as she's pretty happy with the Linux games. That's her world of computers...games and websurfing. I don't even think I'll put that one on the network...I get tired of removing spyware and such.

 

Anyway, have a great day there and I will have a busy day ahead here. Thanks so much for all this info and the time you've spent helping me. I'll be back later today with the result of how this went. If I have problems, I can always go back to this configuration, until we can sort it out, or I have an old 4 Gig IDE hard drive with my basic Mandrake setup already installed. I can just hook it up and leave it lying on the desk!

 

Regards,

zenarcher

Email: zenarcher@cableone.net


Sometimes, I think I'm too dumb to have a computer!:) Anyway, I have been following the instructions from blackpage. However, I have run into a bit of a problem here. Here is what I've done so far:

 

1) I have removed all the hardware RAID, which worked just fine.

2) I have removed all partitions of existing Mandrake install and started anew.

3) I partitioned Disk 1 and Disk 2, exactly to the size and descriptions shown in the 2.1 Partition layout. All went just fine.

4) I have installed Mandrake 10.1 PowerPack on the 9 GB partition of Disk 1 (sda1)

5) The bootloader installed to /dev/sda which I believe is correct, rather than to sda1 or one of the partitions.

6) I have removed install disks and rebooted the system, which came right up, however, I have no RAID installed as of yet, so am now ready to go for the RAID install.

7) Incidentally, mdadm did install a configuration file called mdadm.conf in /etc/mdadm, so I believe that is correct.

 

Now, here is my problem. I am a bit confused by Step 4: "Boot into your Mandrake system and create some software-RAID volumes, using the mdadm utility. In fact, you only have to create one volume." Following that instruction there is some information, but I'm not sure what I'm supposed to do there. After that, the command to issue to create the RAID device is shown as:

# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb2

 

If I run that command right now, I get the following error:

"You haven't given enough devices (real or missing) to create this array." Looking at the mdadm website, I think I have to specify sda and sdb, but I'm not sure.

 

So, if anyone can help, I need to know if I should be trying this command, or if I should be creating some software-RAID volumes first, and if so, how?

 

Thanks,

zenarcher


I seem to have messed something up here...and just did another fresh install. I removed all partitions, etc. and once again configured the partitions on each of the two drives. Still stuck at the same place I was in the previous post.

 

Here is something I just discovered, which may explain some of the previous post situation.

 

The device names are not the same as what was explained, when I was supposed to set them up. Here is what they are:

 

Disk 1

1 GB primary /dev/sda1 Swap swap

9 GB primary /dev/sda5 Ext3 /

67 GB primary /dev/sda6 Ext3 none

 

Disk 2

1 GB primary /dev/sdb1 Swap none

9 GB primary /dev/sdb5 Ext3 none

67 GB primary /dev/sdb6 Ext3 none

 

Maybe that info will help some...

 

I could not figure out how to change them to sda0, sda1, sda2.

 

Thanks,

zenarcher


heya zenarcher,

 

little thing, big trouble :)

 

The partition numbering scheme in Linux depends on what type a respective partition is. The 1st primary partition on drive SATA-0 is called "sda0", the second one is "sda1" and so on ... or so I assumed, at least. With SATA everything seems to be a bit different, and so the 2nd and 3rd partitions can be numbered differently (I didn't dig into that too much, as I consider SATA a loss of an approved standard and therefore a pain in the you-know-where :).

 

Or maybe the funky numbers come from the VIA controller, I have no idea, but the good thing is: it's no big deal. Keep your HDD setup as it is, with the partitioning scheme as laid out.

 

Disk 1

1 GB primary /dev/sda1 Swap swap

9 GB primary /dev/sda5 Ext3 /

67 GB primary /dev/sda6 Ext3 none

 

Disk 2

1 GB primary /dev/sdb1 Swap none

9 GB primary /dev/sdb5 Ext3 none

67 GB primary /dev/sdb6 Ext3 none

 

If Mandrake thinks it's got to be sda/b 5 and 6, then just let it :) As long as the partition type is set to primary (which it is), we're cool.

 

About MDADM error msg

Well, as said above: you will have to obey the will of the mighty operating system here. Applied to the explanation in my last post, that means:

 

"sda0" becomes "sdb1", "sda1" becomes "sda5" and "sda2" becomes "sda6" on your puter. Same goes for the "sdb"-partitions.

 

The mdadm-command to create the RAID array would therefore be:

Code:
# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda6 /dev/sdb6

 

Regarding the command that produced an error in your case ...

 

Code:
# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb2

 

Error Msg: "You haven't given enough devices (real or missing) to create this array."

 

That message is true: you specified that the array consists of 2 devices ("--raid-devices=2") but only named one, and even that one was invalid due to MDK's SATA partition numbering scheme.

 

So, as I said, it's no big thing. Just use the partition numbers Mandrake has assigned instead of the ones I used in my last post. Those were meant more "logically" (to have any kind of numbering) than as real-world numbers.

 

Keep up the good work, it looks quite good so far and

 

keep us informed


Good Morning and thanks again, blackpage! I thought this was the case, but with my limited knowledge of Linux, I have learned to assume nothing.

 

One more question, since I'm not assuming anything, before I continue here. In your section 4, Creating the RAID volume, you say, "Boot into your Mandrake system and create some software-RAID volumes, using the mdadm utility. In fact, you only have to create one volume." My question is: is this where I do the # mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb5, or is there something I need to do first?

 

I have to say, it was a good thing to do 4 complete clean installs yesterday. This is the best install I've ever had! Everything seems to be working perfectly! While I've been waiting for your reply, I have gone ahead and done all Mandrake updates, installed mdadm and tweaked a lot of my configurations. It's the best Mandrake install I've had yet. I'm getting experience at that, as well!

 

I would recommend, for anyone attempting this SATA RAID procedure, that you do not run out of beer; nor should this be attempted on coffee or tea alone! This is a beer project, and running out at a crucial time could be devastating!:)

 

Will watch for your reply, blackpage...and again, thank you so much for the help!

 

Regards,

zenarcher


gidday zenarcher

 

You can create the RAID array any time. That is: create it once the OS is installed properly and boots flawlessly, which seems to be the case now. Just boot into Mandrake and combine the partitions sda6 and sdb6. It should be no prob anymore now.

 

have a good day


Thanks so much, blackpage. I have had a busy day here, today. Two computers showed up, both with Microsoft and a ton of Trojans. One had finally quit even starting up. Been working on them and have the wife off work today, as well. I'm going to take a shot at creating the RAID array tomorrow and really hope I can get through it, as right now, this install is the best I've ever had! Everything is working perfectly! I'll keep you posted.

 

Regards,

zenarcher


The Linux box...she died! I know blackpage was waiting to hear how the SATA RAID went. I made some notes, so I hope they will help in figuring out a solution.

 

Everything was going fine....in fact, I did create the RAID array and even had a difficult time getting rid of it when I went to remove all partitions to begin a new install.

 

Everything went fine until Step 5.1. At that point, I did have to use the command mkfs.ext3 /dev/md0 and also mkfs.ext3 /dev/sdb5. After doing so, I was able to run the mount command just fine. No errors. So, again, I thought I was well on my way.

 

I then tested the RAID device and all was well. I even did the reboot and all was fine. No problem....Mandrake came right up as it should.

 

Now, for the real problem area! I ran the # cp -R /usr /save, then the # mv /usr /raid, which went fine...and finally the # ln -s /raid/usr /usr. There were no errors during this process.

 

I then attempted to run the same commands for /bin...running the above commands but replacing /usr with /bin. When I did, I got the error "No such file or directory" when I tried to run the mv command. I then tried to run the same commands on /home, with the same error statement.

 

I then closed the terminal and tried to re-open the terminal, which I could not do. Just got a white screen and no prompt.

 

I rebooted the computer...and when Mandrake attempted to start, I got a logon prompt. I attempted to log on, but it would not accept the logon.

 

I am thinking this probably has something to do with the entry I was told to add in /etc/fstab, but blackpage had a feeling a script might be needed in /etc/rc.d or /etc/init.d, and I am thinking that might be the problem.

 

As I say, the RAID array was created just fine. No problem was apparent there.

 

Anyway, I had to remove all partitions and reinstall Mandrake PowerPack 10.1. I am now back running as normal.

 

I won't worry about it, as I just ordered Mandriva 2005 PowerPack, the 6 disk set, which should be here the end of this week. I will clear all of this install out and install the new Mandriva 2005, which will give blackpage a bit of time to maybe think of a solution. Also, I have a couple of Windows computers to build for customers, so I'll stay out of trouble with Linux!

 

Anyway, blackpage, I wanted to bring you up to date. I'm getting real fast at installing Mandrake Linux...and even pretty good at going through your instructions for creating the RAID array!:) Now, if we can just resolve the issue of what happened right here at the end, I think it may work. I do think we're getting close! Maybe I'm just an optimist!

 

Regards,

zenarcher


heya zenarcher

 

curses, curses, I have to say. I didn't think there could possibly be such side effects when you "mv" folders like "/usr/bin", "/bin" and "/sbin".

 

As far as I can see, you actually did create the symbolic reference "/usr" which points to "/raid/usr/". Given that, I'm a bit clueless as to why MDK couldn't find the "mv" command (which normally lives under "/bin" or "/usr/bin"). A possibility could be that MDK doesn't "follow" symbolic links to special file systems like RAID volumes for system-critical folders (just like e.g. the Apache webserver doesn't like sym-links to its web-root directory on FAT32 volumes).

 

A possible solution could be to add the new locations to the path before beginning the move-procedure. Open a console and verify your current paths this way ...

 

Code:
user@box# echo $PATH

 

This will show all current search paths. All you have to do is add the new locations at the start of this string. This can be accomplished with a command like this ...

 

Code:
user@box# export PATH="/raid/bin:/raid/usr/bin:/raid/usr/local/bin:/raid/sbin:/raid/usr/sbin:$PATH"

 

It's important that the new paths are added at the beginning of the string, as this ensures that Mandrake will look in these new locations first when it searches for the commands you need.
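
If you want the changed path to survive the next login too, the same export line could go into the shell's startup file; a sketch, assuming bash ...

Code:
user@box# echo 'export PATH="/raid/bin:/raid/usr/bin:/raid/sbin:/raid/usr/sbin:$PATH"' >> ~/.bashrc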

 

Please keep in mind that I'm guessing at this point, and I truly hate having no box around at the mo to verify my babbling (all my spare machines are currently configured as a render cluster for some huge 3D Blender project). But as I said, as soon as the new "baby" arrives (Athlon DualCore, yay! :)) I will give SW-RAID a good go.

 

As it goes for the rcX.d scripts: we ain't that far yet :) So it's possible that some more problems are lurking there, waiting to just "make your day" :)

 

good luck


Good Morning blackpage. Well, I'm considering it all a learning experience!:) Obviously, my wife thinks I'm crazy. As I say, I have a couple of computers to build this week and will be waiting for the new Mandriva 2005 PowerPack to arrive, so I can start fresh and install it. I will set up the partitions and all, per the instructions you sent me. I'll be ready to go, once you have your new box there and can experiment a bit and maybe figure out the problem.

 

I do think everything went fine, up to the mv point. As I say, I could see the RAID drive in the partitions and drives section and the RAID drive with the correct size and all.

 

I did not get any errors when making the mv for /usr; that seemed to go fine. Of course, there was no way for me to know whether Mandrake was able to find /usr after that. I suspect not, as the errors occurred when I tried to perform the mv command for /bin, /home and anything else, since /usr was the first I did. At that point, all was lost...Mandrake couldn't find root or anything.

 

Keep me posted and I'll watch for you to let me know when you get your new box up and running..and hopefully, a Mandrake solution to this RAID setup.

 

Have a great day!

zenarcher


Just wondering if blackpage has had a chance to try the SATA RAID configuration we were working on yet??

 

zenarcher


howdy zenarcher,

 

I informed you via PM that bits of the hardware for the new box have already arrived. If everything goes well, I should be ready, willing and able to tackle the RAID issue over the upcoming weekend. So stay tuned :)

 

cu

 

bp


Hey blackpage,

 

Thanks so much! I suppose I'd have had to check for PMs. :) I've never received one, so I suppose I didn't look!!!

 

I'll stay tuned!!

zenarcher

