Linux Install

Zygomorphic
Level 17
Hello again, everyone. It seems we haven't discussed Linux here in a while, and I thought it was about time to change that, since summer is here and I am nearly done rebuilding my laptop. With that in mind, I figured I would go ahead and share my experiences with the system.

I had mentioned previously that I was planning on putting Windows in a VM and going Penguin all the way. :) Well, my friends, that has happened, and I am running Linux Mint 14 on my Seagate hybrid drive (15-second boot time :)). I have Windows 7 in a VM now, and it seems happy there; it's easier to manage than Windows on a drive alongside Linux. I have to say, getting rid of bare-metal Windows made things easier, since Windows is the only OS that doesn't respect other people's bootloaders and is arrogant enough to assume it's the only OS you want. :mad:

I did some research on the best partitioning scheme for Linux and on which filesystems to use for which partitions. This is the scheme I chose; since I am planning on running multiple VMs on this machine, maximizing storage was the priority.
/dev/sdb2  /boot   255 MB  ext2
/dev/sdb3  (extended partition)
/dev/sdb6  /        50 GB  ext4
/dev/sdb7  /home   418 GB  ext4
/dev/sdb5  swap     32 GB  swap
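
For reference, a hypothetical /etc/fstab matching that layout (device paths shown for brevity; Mint's installer would normally write UUID= entries instead):

/dev/sdb2  /boot  ext2  defaults           0  2
/dev/sdb6  /      ext4  errors=remount-ro  0  1
/dev/sdb7  /home  ext4  defaults           0  2
/dev/sdb5  none   swap  sw                 0  0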

I have 16 GB of RAM, so I went with the rule-of-thumb 1.5-2x RAM for swap space. I don't tend to hibernate my system, but if I ever do, I want the option of doing so, and hibernation needs swap at least as large as RAM. I could probably have gone with 16 GB, and if I were more pressed for space, I would have.

I've heard some people suggest splitting /boot off from /, so I decided it couldn't hurt, and it keeps things a bit more compartmentalized. If you don't think this is necessary, I'd like to know for the future.

I'm a firm believer in separating the /home partition from everything else, especially since I sometimes change Linux distros, and a separate /home lets me preserve all my files. 🙂 For someone who just wants to try Linux out, I'd go with a simple scheme of just / and swap, but since I use Linux for my daily tasks, I wanted a better setup.

TODO:
* Install the NVIDIA drivers for better graphics performance and power management.
* Download the Linux kernel sources and learn more about building kernels.
* Try out other distros and configs in VMs - more learning.
* Upgrade to Linux Mint 15 when it comes out.

Any thoughts or suggestions? I'm happy to get feedback and would like to hear people's opinions. I'm particularly interested in starting a flame war about distros, as well as filesystems. :cool:
I am disturbed because I cannot break my system...found out there were others trying to cope! We have a support group on here, if your system will not break, please join!
http://rog.asus.com/forum/group.php?groupid=16
We now have 178 people whose systems will not break! Yippee! 🙂
LINUX Users, we have a group!
http://rog.asus.com/forum/group.php?groupid=23

powerhouse wrote:

My striped LVM data disks (currently 2x 2TB HDDs) hold files ranging from ~250KB (small JPEGs) to ~20MB (RAW camera files) to ~60-80MB (TIFFs). It seems that write performance doesn't suffer too badly when using LVM versus RAID0.


Indeed. You could still benefit a lot from a hardware RAID controller, though, due to cached writes and redundancy. Hardware RAID also avoids some of the limitations of software RAID; more on this below.


powerhouse wrote:

I think we both have totally different usage patterns:


Indeed. :)


powerhouse wrote:

1. Yours, from what I gathered from another post of yours, is that you run a business and replace/sell hardware regularly to fit new needs, or to make sure you don't run into trouble with old hardware.
Also, it seems that running multiple OSes requires your disk I/O to cope equally well with small and large files, and it's easy to see how you and your business benefit from a well-performing virtualized Windows test-bench environment based on Xen.


Considering I'm an indie developer, I run a very tight ship, meaning I only upgrade when performance is needed and the cost is justified. Hardware for me is a business asset first, and any recreational uses come second. :) For example, I would never buy an X chip, as the additional CPU cache is not worth double the price for my particular usage. If I were building a render farm I would (although I'd go with Xeons + Teslas in that case, heh). As long as the hardware pays for itself, I'll upgrade. 🙂


powerhouse wrote:

2. My PC platform grows "naturally". When I built my current PC, I reused most of the drives from my old PC and added only an SSD (for the Linux OS and the Windows guest OS) and another HDD for data (the second HDD in the striped LVM volume). In the meantime I have had to add another HDD.
The advantage of LVM - to me - is that I can add HDDs and grow file systems when needed. NOT using RAID frees me from the need to buy identical disks. So my PC holds a 120GB SSD, a 500GB drive, a 1TB drive, and 3x 2TB drives. Of course, if I were buying disks now I would go for 3TB drives (best price/performance ratio and, by now, mature technology).
The only thing I'm not yet settled on is the backup method(s). Currently I use several different methods for different data and purposes: backup on import of photos (a Lightroom feature), internal backup of the OS and VMs using LVM snapshots with dd and pigz, backup to external HDDs using the disks' backup utility under Windows, and rsync-style backup to my media PC. It's a bit of a mess, I admit.

Given the performance and ease of use, I will probably standardize on rsync for data backup (I use luckybackup over ssh for that) and move some disks to my media PC/server in the cellar.


Exactly - this is the advantage of LVM, the flexibility I was talking about. Though do note that quality hardware RAID controllers will let you do a few tricks with disks too. For example, with three arbitrary drives - say two 1TB and one 500GB - you could build a RAID5 across a 500GB slice of all three (1TB usable) AND a RAID0 from the 500GB left over on each of the 1TB drives, or use the leftover space as JBOD. Quality hardware controllers give you a form of flexibility plus several other features such as RAID level migration, online growing/shrinking, and so on. They are not as flexible as plain LVM, but they're pretty close. The downside? They're expensive (at least the quality ones: LSI, Areca/Tekram, etc.).

And your backup "hell" brings me to the other great benefit of RAID: redundancy. I do not use RAID0 the way most people do. For me, RAID0 is useless for anything other than temporary/scratch disks or cache storage; I am not interested in adding points of failure to my systems.
RAID is all about redundancy for me, and the performance I also get out of it is a bonus. Managing terabytes of backups is utter hell. RAID5 gives you a decent level of redundancy, and RAID6 even more, at the cost of one or two drives' worth of capacity. This keeps you "operating" unless you suffer a catastrophic failure. It's not a substitute for proper backups, but it certainly reduces the amount of data you need to back up, and how often.

Everything critical should be backed up with multiple copies, but non-critical data is fine with RAID5 redundancy. I would certainly mind losing entire projects, but the space and resources required to back those up are minimal. VMs, OS and application installations, and so on are nothing that cannot be replaced. RAID5, though, keeps you working even if one drive fails, and RAID6 with two. So you avoid downtime and backups of insignificant data. LVM striping is just as bad as RAID0 as far as I'm concerned: any setup where one drive going bad results in complete data loss is absolutely horrible. I cannot back up 12TB of data regularly just in case something like that happens. I can back up 1-2TB and use RAID5 to make it really hard to lose the rest. In the rare case that two drives go bad simultaneously without any previous signs of failure - which is statistically improbable unless you're operating a data center - I still have my critical data and can rebuild. Meanwhile, a single drive out of three failing is not that unlikely, is it?
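
To put rough numbers on that (a back-of-the-envelope sketch; the ~3% annual failure rate per drive is my assumption, not a measured figure): with three drives, the chance that at least one fails within a year is 1 - 0.97^3 ≈ 8.7%, while the chance that a second drive also dies inside a few-day rebuild window is on the order of 0.05%. Single-drive failures are the common case; simultaneous double failures are the rare one.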

Zygomorphic wrote:
Me, I feel like a moron... Thanks @Nodens! 🙂 Actually, I've played with Xen before but never got it running right, so I may have to try it again, just to see if I can get it working, since it looks like such a great idea. However, I'm on a laptop, so it may not be so useful. The other thing I may have to play with is Linux on my Android phone...


Years ago Xen was much harder to use. Nowadays, Zyg, it's very easy, especially for an experienced Linux user such as yourself, and with PCI passthrough (on a laptop that has an IGP as well) you can forget ever booting Windows on bare metal again 😉
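
For the curious, here is a minimal xl guest config sketch (assuming Xen 4.x with the xl toolstack; the file name, partition, and PCI address below are invented for the example):

# win7.cfg - hypothetical minimal HVM guest
builder = "hvm"
name    = "win7"
memory  = 4096
vcpus   = 2
disk    = [ 'phy:/dev/sdb8,xvda,w' ]  # hand a raw partition straight to the guest
pci     = [ '01:00.0' ]               # PCI passthrough of the discrete GPU

Start it with "xl create win7.cfg".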
RAMPAGE Windows 8/7 UEFI Installation Guide - Patched OROM for TRIM in RAID - Patched UEFI GOP Updater Tool - ASUS OEM License Restorer
There are 10 types of people in the world. Those who understand binary and those who don't!

RealBench Developer.

I agree, @Nodens, data backups are important. Loved the part about operating a data center (which, thank heaven, I don't). I keep backups of everything important that I don't want to risk losing, and I don't bother with LVM, since it is just asking for more trouble than it's worth.
Nodens wrote:
Years ago Xen was much harder to use. Nowadays, Zyg, it's very easy, especially for an experienced Linux user such as yourself, and with PCI passthrough (on a laptop that has an IGP as well) you can forget ever booting Windows on bare metal again 😉

@Nodens, I wish I were as experienced as you think I am. I've used Linux a bit, and I know my way around OSes in general, but as far as the nitty-gritty of system configuration goes, that's not me.

I believe what you say is true, @Nodens, and I am going to have to try that, since it would be really awesome. 🙂 Now, can you run an existing installation that is on a separate partition from within Xen? That would be the first step towards clearing Windows off the bare metal... :cool:
I am disturbed because I cannot break my system...found out there were others trying to cope! We have a support group on here, if your system will not break, please join!
http://rog.asus.com/forum/group.php?groupid=16
We now have 178 people whose systems will not break! Yippee! 🙂
LINUX Users, we have a group!
http://rog.asus.com/forum/group.php?groupid=23

powerhouse
Level 7
OK, you beat me, Nodens. :) I'm just a bloody amateur.

By the way, how do HDDs/SSDs passed through to a VM compare with LVM volumes? Have you done any comparisons?

And how about LVM striping versus RAID0?

Nodens
Level 16
Passing through the physical drive is always faster, and the difference is most noticeable in random I/O, hence much more noticeable on mechanical drives than on SSDs. Considering the volume sizes you need to run a multitude of VMs, they usually end up on mechanical drives due to the cost of SSDs. The LSI CacheCade solution, caching a mechanical RAID array on an SSD array, is as good as it gets performance-wise for this particular use (VMs). I use that exact setup with an LSI 9260, soon to be replaced with a 9271. :) I use VMs extensively for testing software I develop on different versions of Windows and sometimes different setups (it saves a lot of time to have OS images preconfigured for various scenarios).

mdadm software RAID0 is faster, again considerably so on random I/O. LVM striping is more flexible, though, as you can do a lot of things with LVM that you can't do with RAID. This is why using LVM on top of RAID is common practice in IT: you set up a RAID array and then do volume management on it with LVM (without striping, of course). Hardware RAID is a whole other level because of its cache and dedicated processor for the XOR function (used by parity RAID levels 5 and 6).
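
A sketch of that LVM-on-RAID pattern with mdadm (device and volume names are placeholders; check them against your own disks before running anything like this):

# build a 3-disk software RAID5, then do volume management on top with LVM
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -L 500G -n projects vg_data   # plain (non-striped) logical volume
mkfs.ext4 /dev/vg_data/projects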

Here, check this out. It's somewhat dated, but it'll give you a perspective on the performance difference: http://www.linux-mag.com/id/7582/2/
RAMPAGE Windows 8/7 UEFI Installation Guide - Patched OROM for TRIM in RAID - Patched UEFI GOP Updater Tool - ASUS OEM License Restorer
There are 10 types of people in the world. Those who understand binary and those who don't!

RealBench Developer.

powerhouse
Level 7
Thanks Nodens - very helpful!

Edit: I read the article you linked to - very informative, particularly the first section comparing striped LVM with RAID0.

My striped LVM data disks (currently 2x 2TB HDDs) hold files ranging from ~250KB (small JPEGs) to ~20MB (RAW camera files) to ~60-80MB (TIFFs). It seems that write performance doesn't suffer too badly when using LVM versus RAID0.
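
(For reference, a two-way striped LV like that is a one-liner with lvcreate; the volume group and LV names here are invented:

lvcreate --stripes 2 --stripesize 64 -L 3.6T -n photos vg_photos

--stripes spreads the data across two physical volumes, much like RAID0 does across disks.)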

I think we both have totally different usage patterns:

1. Yours, from what I gathered from another post of yours, is that you run a business and replace/sell hardware regularly to fit new needs, or to make sure you don't run into trouble with old hardware.
Also, it seems that running multiple OSes requires your disk I/O to cope equally well with small and large files, and it's easy to see how you and your business benefit from a well-performing virtualized Windows test-bench environment based on Xen.

2. My PC platform grows "naturally". When I built my current PC, I reused most of the drives from my old PC and added only an SSD (for the Linux OS and the Windows guest OS) and another HDD for data (the second HDD in the striped LVM volume). In the meantime I have had to add another HDD.
The advantage of LVM - to me - is that I can add HDDs and grow file systems when needed. NOT using RAID frees me from the need to buy identical disks. So my PC holds a 120GB SSD, a 500GB drive, a 1TB drive, and 3x 2TB drives. Of course, if I were buying disks now I would go for 3TB drives (best price/performance ratio and, by now, mature technology).
The only thing I'm not yet settled on is the backup method(s). Currently I use several different methods for different data and purposes: backup on import of photos (a Lightroom feature), internal backup of the OS and VMs using LVM snapshots with dd and pigz, backup to external HDDs using the disks' backup utility under Windows, and rsync-style backup to my media PC. It's a bit of a mess, I admit.
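
(The snapshot/dd/pigz step, as a rough sketch with made-up volume and path names:

lvcreate --snapshot -L 10G -n root_snap /dev/vg0/root
dd if=/dev/vg0/root_snap bs=4M | pigz > /mnt/backup/root-snapshot.img.gz
lvremove -f /dev/vg0/root_snap

The snapshot gives a frozen, consistent view of the volume while the system keeps running; dd streams it out and pigz compresses on all cores.)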

Given the performance and ease of use, I will probably standardize on rsync for data backup (I use luckybackup over ssh for that) and move some disks to my media PC/server in the cellar.
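
(A typical rsync-over-ssh invocation for that kind of data backup, with an invented host and paths:

rsync -aHv --delete -e ssh /data/photos/ backup@mediapc:/srv/backup/photos/

-a preserves permissions and timestamps, -H keeps hard links, and --delete mirrors removals to the target.)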

IM2L844
Level 12
Today I put Precise Puppy 5.7.1 on a 16 GB SanDisk Extreme 3.0 that I found for $28 @ B&H and I have to say that I am thoroughly impressed with the speed. It loads to RAM in seconds and afterward everything is nearly instantaneous. We'll see if I'm still impressed overall after I drive it around for a couple of weeks. The jury is still out, but I really like the idea of being able to carry everything I might need around with me in my pocket if I want to.
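
For anyone wanting to try the same, writing a live image to a stick is a two-liner (the image name is just an example, and /dev/sdX must be replaced with the stick's actual device; pointing dd at the wrong disk will destroy its contents):

dd if=precise-5.7.1.iso of=/dev/sdX bs=4M
sync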

IM2L844 wrote:
Today I put Precise Puppy 5.7.1 on a 16 GB SanDisk Extreme 3.0 that I found for $28 @ B&H and I have to say that I am thoroughly impressed with the speed. It loads to RAM in seconds and afterward everything is nearly instantaneous. We'll see if I'm still impressed overall after I drive it around for a couple of weeks. The jury is still out, but I really like the idea of being able to carry everything I might need around with me in my pocket if I want to.

I hadn't thought about using Puppy, but due to its smaller size, I might. I pretty much always carry a USB flash drive with Linux on it so that I can boot it wherever I go, but it takes a while to load, probably due to the larger distro coupled with the slower USB interface.
I am disturbed because I cannot break my system...found out there were others trying to cope! We have a support group on here, if your system will not break, please join!
http://rog.asus.com/forum/group.php?groupid=16
We now have 178 people whose systems will not break! Yippee! 🙂
LINUX Users, we have a group!
http://rog.asus.com/forum/group.php?groupid=23

IM2L844
Level 12
Zygomorphic wrote:
I hadn't thought about using Puppy, but due to its smaller size, I might. I pretty much always carry a USB flash drive with Linux on it so that I can boot it wherever I go, but it takes a while to load, probably due to the larger distro coupled with the slower USB interface.


It's rockin' so far. Timed it this morning...16 seconds to load, but I don't know how much of that is due to the 3.0 flash drive. 2.0 might take a couple seconds more, but that wouldn't be anything to really bitch about. Once it's in RAM, it's freaky fast on my system. It's definitely worth giving a whirl. I like playing with new toys though and after a couple of years of using various distros I'm still not set in my Linux ways so maybe it's just worth it to me. 😉

IM2L844 wrote:
It's rockin' so far. Timed it this morning...16 seconds to load, but I don't know how much of that is due to the 3.0 flash drive. 2.0 might take a couple seconds more, but that wouldn't be anything to really bitch about. Once it's in RAM, it's freaky fast on my system. It's definitely worth giving a whirl. I like playing with new toys though and after a couple of years of using various distros I'm still not set in my Linux ways so maybe it's just worth it to me. 😉

Yeah, that's what I was figuring, based upon my boot taking closer to 30 seconds. I'm willing to bet it's my older USB 2.0 flash drive that's causing it. My USB 3.0 drive is faster, but USB 3.0 is unstable on my G53SX (always has been).
I am disturbed because I cannot break my system...found out there were others trying to cope! We have a support group on here, if your system will not break, please join!
http://rog.asus.com/forum/group.php?groupid=16
We now have 178 people whose systems will not break! Yippee! 🙂
LINUX Users, we have a group!
http://rog.asus.com/forum/group.php?groupid=23

Nodens
Level 16
I always carry two USB flash drives with me: one with Windows tools in general that also boots FreeDOS (it includes flashing tools and so on), and one with Kali Linux (for on-the-fly security auditing) and Knoppix (for data recovery).
RAMPAGE Windows 8/7 UEFI Installation Guide - Patched OROM for TRIM in RAID - Patched UEFI GOP Updater Tool - ASUS OEM License Restorer
There are 10 types of people in the world. Those who understand binary and those who don't!

RealBench Developer.