FreeNAS

2-12-2019 04:41:15
Do we have any FreeNAS users?

I’ve read the documentation and it looks good. Any serious downsides that aren’t mentioned?

I’d be using it to provide storage to my two ESXi hosts to run a dozen or so virtual machines.

2-12-2019 04:41:16
I don't know FreeNAS, but from memory it's based on ZFS, which has been around for a while and is also used in enterprise-scale systems. ZFS (I run it on my SOHO server) offers some interesting "goodies" such as snapshots, replication and scrubbing. And of course ZFS's raison d'être is to not lose data, so it has error checking up the wazoo and a copy-on-write paradigm.
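For a flavour of those goodies, this is roughly what they look like from the command line (the pool and dataset names "tank" and "tank/vmstore" are made-up examples):

    # take a point-in-time snapshot of a dataset
    zfs snapshot tank/vmstore@before-upgrade

    # incremental replication to another machine over ssh
    zfs send -i tank/vmstore@monday tank/vmstore@tuesday | ssh backupbox zfs receive tank/vmstore

    # scrub: walk the whole pool verifying every block against its checksum
    zpool scrub tank
    zpool status tank    # shows scrub progress and any errors found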

I can say something about running VMware against networked storage, as I used to look after a VMware cluster backed onto a NetApp filer. Surprisingly, both VMware and NetApp recommended not using iSCSI for the storage-host interaction; they recommended bog-standard NFS shares for the vmdk stores. (NFS being stateless, versus SMB/CIFS, which is a "chatty" and stateful protocol.) It was certainly a lot easier to implement than mucking about with iSCSI initiators and so forth, and it made Storage vMotion simple to implement (i.e. it "just worked" without doing anything special).

Ideally, you'd have a stand-alone "private" LAN between the vHosts and the storage (i.e. build a SAN), though ours had been implemented on a separate VLAN, which does more or less the same thing, albeit there is still competition between VLANs for bandwidth. Obviously, you'll need vHosts and storage with multiple NICs or VLAN-capable NICs to do that. They also recommended using jumbo frames on the storage LAN/VLAN, as it means each block read/write can be carried in a single (jumbo) frame rather than being split over multiple "standard"-sized Ethernet frames, which avoids fragmentation/reassembly and is of course faster.
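Enabling jumbo frames end to end might look something like the below; vSwitch1, vmk1 and em0 are just example names, and the physical switch in between must also allow an MTU of 9000:

    # ESXi side: raise the MTU on the vSwitch and the storage vmkernel port
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000

    # FreeNAS/FreeBSD side
    ifconfig em0 mtu 9000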

They also recommended "aligning" the OS volumes on 4K boundaries, for the similar reason of mitigating block-to-storage read/write fragmentation, but IIRC all Windows OSes from around Server 2008 onwards do this by default. Not sure about the many *NIX variants.
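Alignment is easy enough to check in either camp: a partition is 4K-aligned if its starting offset is an exact multiple of 4096 bytes (device names below are examples):

    # Windows guest: StartingOffset should divide evenly by 4096
    wmic partition get Name, StartingOffset

    # Linux guest: ask parted directly (1 = first partition)
    parted /dev/sda align-check optimal 1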

Finally, on my NetApp, with multiple vGuests running the same OS, we got some really impressive results using de-duplication on the file stores, often better than 50%. I've never had a play with that on ZFS, but it offers similar de-duplication technology, albeit it is said to require a lot of RAM.
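One nice thing on the ZFS side is that you can estimate the payoff before switching anything on: zdb can simulate the dedup table from existing data. The commonly quoted rule of thumb for memory is somewhere around 5 GB of RAM per TB of deduplicated data, so check that against your pool size first ("tank" is again an example pool name):

    # simulate dedup and print the projected ratio (nothing is changed)
    zdb -S tank

    # if the numbers look good, enable it per dataset
    zfs set dedup=on tank/vmstore
    zpool list tank    # the DEDUP column shows the ratio actually achieved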

Author | 2-12-2019 04:41:17
Excellent. And yes, I was leaning towards NFS rather than iSCSI. I’d be fitting plenty of RAM and can run dual NICs no problem. I’ll have to move/swap some hardware around, but overall I think it’ll reduce the number of disks I’ve currently got spinning from 32 to a slimmer 16. All the VM operating systems are Windows Server 2016 or higher on the Windows side and CentOS 7 on the *NIX side.

I’ll probably build a small test FreeNAS server and have a play with it.
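If you want to script the test box rather than click through the UI, exporting a dataset over NFS from the ZFS side can be as simple as this (dataset name made up, and it assumes the NFS services are already running; on FreeNAS proper you'd normally configure the share through the web UI instead):

    zfs create tank/vmstore
    zfs set sharenfs=on tank/vmstore
    showmount -e localhost    # confirm the export is visible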