Thing is, the output does show the 4 channels. Too many variables, I guess. Thanks for notifying me about that bug. However, it did not work for me. Then I noticed your post turned into a giant rant about Marvell controllers. You should have seen my jaw drop.
Date Added: 17 August 2017
File Size: 47.26 Mb
Operating Systems: Windows NT/2000/XP/2003/7/8/10, MacOS 10/X
Price: Free (registration required)
I also consider a kernel bug to be a vendor fault anyway.
Adventures in I/O Hell
I will check to make sure it’s plugged in correctly, etc. The drives plugged into slots 3 and 4 don’t show up.
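Before blaming cabling, it's worth confirming whether the kernel even enumerates the controller and brings each port's link up. A minimal sketch of the checks I'd run (the `11ab` grep assumes a Marvell controller, since that's Marvell's PCI vendor ID; adjust it for your card):

```shell
#!/bin/sh
# Sketch: see how far the kernel gets with the controller and its ports.

# 1. Is the controller itself enumerated on the PCI bus?
if command -v lspci >/dev/null 2>&1; then
    lspci -nn | grep -i '11ab' || echo "no Marvell (11ab) device on the PCI bus"
fi

# 2. Did libata bring each port's link up? (reading dmesg may need root)
dmesg 2>/dev/null | grep -E 'ata[0-9]+: SATA link' \
    || echo "no SATA link messages readable"

# 3. Which block devices actually registered with the kernel?
ls /sys/block 2>/dev/null || echo "no /sys/block"

echo "checks done"
```

If the controller shows up in `lspci` but two of the four ports never log a "SATA link up" line, the problem is upstream of the drives themselves.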
The controller shows up in lspci only as an unknown device with Marvell’s vendor ID (11ab). I benchmarked the living hell out of everything from 3Ware and Areca hardware RAID to nvidia, promise, and highpoint fakeraid to FreeBSD raid to Linux raid a month or so ago, and Linux kernel raid stomped the living hell out of everything.
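You can reproduce a crude version of that comparison without any special tooling. A minimal sequential-throughput sketch using plain `dd` (the test file path and 64 MiB size are placeholders I chose; on a real array you'd point it at a file on the array, and `conv=fsync` assumes GNU dd):

```shell
#!/bin/sh
# Rough sequential write-then-read throughput check. Not a substitute for a
# real benchmark, but enough to compare controllers in the same box.
TESTFILE=${TESTFILE:-/tmp/ddbench.bin}

# Sequential write; fsync so the timing includes hitting the platters.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -n1

# Best effort: drop the page cache so the read actually hits the disk
# (needs root; harmlessly skipped otherwise).
sync
{ echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true

# Sequential read back.
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n1

rm -f "$TESTFILE"
```

For anything serious you'd want fio with multiple job threads, but even this will expose a controller that falls over under sustained I/O.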
Marvell 88SX7042 controller only shows 2 drives instead of 4
Linux hardware issue. Most importantly, no other disks failed at any stage following the move from the Marvell to the SiI controller. Marvell doesn’t provide a lot of information for these chipsets on their website.
I anticipated just needing to arrange the replacement of one disk and being able to move on with my life. The regressions are in the 2.x kernel series. Linux, OTOH, gives very massive performance gains on RAID1, assuming you have at least as many processes reading concurrently as you have physical drives in the array.
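That caveat about concurrent readers matters because md RAID1 balances whole requests across mirrors rather than striping a single stream, so one sequential reader tends to stay on one disk. A sketch for observing it yourself; `DEV` and `N` are placeholders, and the stand-in file just lets the sketch run anywhere (on real hardware, point `DEV` at the md device, e.g. `/dev/md0`):

```shell
#!/bin/sh
# Launch N concurrent sequential readers against a target and time them
# together. On a 2-disk RAID1, N=2 should run close to the speed of N=1.
DEV=${DEV:-/tmp/raid1-demo.bin}
N=${N:-2}

# Stand-in target so the sketch runs anywhere without an array.
[ -e "$DEV" ] || dd if=/dev/zero of="$DEV" bs=1M count=16 2>/dev/null

t0=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    dd if="$DEV" of=/dev/null bs=1M 2>/dev/null &
    i=$((i + 1))
done
wait
echo "elapsed: $(( $(date +%s) - t0 ))s for $N concurrent readers"
```

Run it once with `N=1` and once with `N=2` against the array and compare the elapsed times; the page cache will mask the effect unless you drop caches between runs.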
Any clues on this? Which is not the point. Having lost 3 disks, it was unclear what the root cause of the array failure was at this time. Ok, that’s more of what I expected, and that’s pretty damn awesome.
[zfs-discuss] Multiple SATA controllers and ZFS on Linux
But fakeraid will SUCK performance-wise, and hardware raid will not be worth the price point and in general likely won’t perform anywhere near as well as the kernel raid anyway, at this level. And even if I plug the worst possible, spec-violating SATA drive into a controller, that controller should be able to handle it properly and keep the disk isolated, even if it means disconnecting one bad disk.
Wow, that is really bad. I gave up on the Highpoint and bought the Supermicro card that was mentioned here.
SATA hardware features – ata Wiki
Depressingly, almost as soon as the hardware was changed and I started my RAID rebuild, I started experiencing problems with the system. Alternately, does anyone have any experience with the Adaptec SA under Linux? I was getting pretty tired of having to squeeze all my stuff into only a few TB, so I wanted a fix no matter what. It seemed unlikely that a low-level physical device could be the cause, but I wanted to eliminate every common-infrastructure possibility, and all the bad disks tended to be in this enclosure.
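When failures cluster in one enclosure like that, a per-disk health sweep helps separate genuinely dying drives from a bad backplane or controller: SMART-healthy disks that keep getting dropped point at the infrastructure, not the disks. A sketch assuming smartmontools is installed (the `/dev/sd?` glob is an assumption; widen it for more than nine disks):

```shell
#!/bin/sh
# Walk every SATA disk and record its SMART health verdict.
found=0
for dev in /dev/sd?; do
    [ -b "$dev" ] || continue    # skips the literal glob if nothing matched
    found=1
    echo "== $dev =="
    if command -v smartctl >/dev/null 2>&1; then
        smartctl -H "$dev" | tail -n1    # overall-health line (needs root)
    else
        echo "smartctl not installed; skipping SMART check"
    fi
done
[ "$found" -eq 1 ] || echo "no /dev/sd? disks visible"
echo "sweep complete"
```

If every disk in the suspect enclosure passes SMART but still drops off the bus, swap the enclosure or controller before condemning any more drives.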
I have that Highpoint and it works in most distros, but the Promise and SiI cards have drivers as well. Without digging into the Marvell driver code, it looks very much like the hardware is failing in some fashion and dropping the disks.