
MEIN NETZWERK

Discussion in 'Science & Technology' started by Private_Ale, May 28, 2012.

  1. Private_Ale

    Private_Ale King Neckbeard

    So get this: I've been reading around, and once I get Geneva operating again (it needs a second heatsink, and I want a new WD RE), I'm going to experiment with virtualizing the firewall/router. Instead of it being a physical unit with real hardware, I want to toy with making it a VM.

    The current hardware is old and inefficient. Geneva is much newer, not to mention it has an overabundance of resources and over $1000 worth of processing power.

    I'm excited about this!!!
     
  2. Private_Ale

    Private_Ale King Neckbeard

    So based on my research, here's what you do:

    You set up one bridged NIC; that's the default anyway when you create a VM host. You create the VM that will house the router/firewall. For the router, you use IOMMU (VT-d) to pass the second NIC through to the VM. This makes the NIC bare-metal, in the sense that the VM has direct and exclusive access to it; this NIC will act as your WAN interface. The bridged NIC will act as your LAN.

    Once the router VM is set up, you create a virtual NIC. This will be your "internal" NIC for the rest of the host machine. You can then un-bridge the LAN NIC and set it up as an IOMMU (VT-d) interface as well, so the VM has exclusive access to it too.

    So then: NIC 1 = WAN; NIC 2 = LAN; NIC 3 = INT (Virtual)
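
    If I end up doing this under KVM/libvirt, the passthrough piece would look something like this in the domain XML. Just a minimal sketch: the PCI address is a made-up placeholder for wherever the WAN NIC actually sits, and br0 is the host bridge.

    Code:
        <!-- WAN: VT-d passthrough, the VM gets exclusive access to the physical NIC -->
        <!-- (PCI address 0000:03:00.0 is a placeholder; find the real one with lspci) -->
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
        </hostdev>

        <!-- LAN: attached through the host bridge, shared with the host -->
        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio'/>
        </interface>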

    That would be amazing. Having the firewall running on the same server as a VM. Running it on Xeon cores.

    Since I have 5 static IPs, I can continue to run my existing firewall and then start building and experimenting with the VM router without interrupting my existing network.
     
  3. Private_Ale

    Private_Ale King Neckbeard

    So I'm going to get a network card for Geneva. The nice part is that both the motherboard and the card have the same exact controller, Intel 82576EB. Adding the card will bring the total NIC count to 4 (2 on board, 2 on card).

    The way I'm currently planning on setting it up is as follows:
    • eth3 will be the bridged port (aka INT). This will be the initial port in use when the system is set up.
    • eth0 will be the WAN port.
    • eth1 will be the LAN port.
    • eth2 will be the WLAN port.
    eth0, eth1, and eth2 will be exclusive to the router VM using IOMMU/VT-d. eth3 will be shared on the system as the bridged port; it will be available to the router VM using VirtIO. eth3 will be the "internal" port. Once the system is configured, it will not have a physical connection.

    The way I figure it, instead of creating a virtual NIC for the internal traffic, I'm better off leaving a physical port bridged. This way, if for some reason the router VM becomes unresponsive, I still have a physical port into the system that's not exclusive to the router VM.
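
    For reference, here's roughly what leaving that physical port bridged looks like on the host. A sketch using iproute2; the interface name matches the plan above, and the address is a placeholder.

    Code:
        # Create the bridge and enslave eth3 to it
        ip link add name br0 type bridge
        ip link set eth3 master br0
        ip link set eth3 up
        ip link set br0 up

        # The host's own address lives on the bridge, not on eth3 itself
        ip addr add 192.168.1.2/24 dev br0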

    Off the topic of NICs ...

    Here's something fun and exciting! pfSense allows you to use hardware crypto accelerators. Geneva has an onboard AES-NI crypto accelerator. Technically, it's the processor that has it. So I can give the router VM passthrough access and have it exploit its full potential!
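
    Since AES-NI is an instruction set on the processor, "passing it through" really just means exposing the host CPU's feature flags to the guest. In libvirt that's a one-liner in the domain XML (a sketch, assuming KVM):

    Code:
        <!-- Expose the host CPU, AES-NI flag included, to the guest -->
        <cpu mode='host-passthrough'/>

    A Linux guest would then show the aes flag in /proc/cpuinfo; pfSense, being FreeBSD-based, reports AESNI in its boot messages instead.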

    This is going to be cool.
     
    Last edited: Jul 29, 2014
  4. Private_Ale

    Private_Ale King Neckbeard

    Instead of getting a dual port 82576EB NIC, I think I'm going to get an I350-T4 based NIC which is a native quad port. It's also Intel's latest and greatest.

    If I do that, I'll disable the onboard NICs and run everything off the card.
     
  5. Private_Ale

    Private_Ale King Neckbeard

    Captain's Log:

    The operation to double the Geneva 5600's operating capacity was a huge success. The procedure and conditions were surgical. Mission Complete. See log files below for details.

    IMG_20140731_160818_241.jpg IMG_20140731_161219_863.jpg IMG_20140731_162439_730.jpg IMG_20140731_162539_662.jpg IMG_20140731_162609_668.jpg
     
  6. Private_Ale

    Private_Ale King Neckbeard

  7. Private_Ale

    Private_Ale King Neckbeard

    Updated the BIOS on Geneva to 2.1c and the IPMI firmware to 3.16.

    Factory fresh!

    IMG_20140802_121015_519.jpg IMG_20140802_121400_672.jpg IMG_20140802_123220_381.jpg

    I'm beginning to like how efficient MS-DOS is at BIOS tasks.
     
  8. Private_Ale

    Private_Ale King Neckbeard

    Once I bring Geneva back online as the box-of-all-trades, I'm going to reorganize the rack. I'm planning on boxing up the old Ripto firewall and LOLserver, then moving all of the network equipment directly under Geneva.
     
  9. Private_Ale

    Private_Ale King Neckbeard

    Finally going to order the network card today.
     
  10. Private_Ale

    Private_Ale King Neckbeard

    Wow! I managed to snag a brand new Intel I350-T4 for less than the OEM cards.

    I was originally going to get a cheap unbranded I350-T4 from China (literally, shipped from China), but I found a new fully branded card for a little less! I still can't believe it. First I got an E5645 for a quarter of the retail price, and now this for so cheap.

    The tech gods have been looking over me lately.
     
  11. Private_Ale

    Private_Ale King Neckbeard

    I was originally wanting to replace the existing drives with WD Re drives, but they're kind of a shitton expensive and the reviews surprisingly aren't that hot for enterprise-grade drives.

    Now I'm considering Seagate's enterprise line, Constellation. They're slightly cheaper, have better reviews, and carry the same 5-year warranty with 24/7 rated use. They also have an unheard-of 128MB of cache/buffer.
     
  12. Private_Ale

    Private_Ale King Neckbeard

    IMG_20140827_172344_162.jpg IMG_20140827_191854_485.jpg
     
  13. Private_Ale

    Private_Ale King Neckbeard

    I'm now considering replacing pfSense with Sophos.

    I still need to compare them further, but Sophos looks solid.
     
  14. Private_Ale

    Private_Ale King Neckbeard

  15. Private_Ale

    Private_Ale King Neckbeard

    I mounted my little 5-port gigabit switch to the rack. That was stressful. I mounted it using the side mounting holes on the rack; however, being a little 5-port switch, it has no holes for rack mounting.

    What I ended up doing was disassembling the switch, installing the base of the switch onto the side of the rack using the wall mounting holes (they actually lined up decently!), and then rebuilding the switch. Luckily there were no screws on the bottom, so reassembling it while mounted was straightforward.

    I'll take pics tomorrow.

    Once Geneva is online again I'm going to retire the hardware firewall (Ripto) and virtualize everything. At the same time, I want my internal network to be gigabit. I can't afford (and don't need) a big rackmounted gigabit switch, since they only come in 16 ports and up. So the little 5-port will be more than enough for LAN1, since LAN2 will have its own switch.
     
  16. Private_Ale

    Private_Ale King Neckbeard

    ForumRunner_20140921_220749.png
     
  17. Private_Ale

    Private_Ale King Neckbeard

    2sexi4me

    IMG_20141029_133005_340.jpg
     
  18. Private_Ale

    Private_Ale King Neckbeard

    OK so from Monday onward I will be dedicating myself to getting this to work.

    I have eth0, eth1, eth2, eth3. eth0 is what I will use as WAN, eth1 will be LAN, eth2 will be WLAN. As of now, eth3 will probably go unused until I get an IP camera system up, then eth3 will be for the CCTV system.

    Here's my current plan of action:

    I will set up a new KVM system using eth1 as the main interface. Once KVM is up and running, I will create my first VM. This VM will host either Proxmox or Sophos. To this VM I will assign two interfaces: the first will be eth0 as a dedicated interface, and the second will be br0, in other words eth1 as a bridge (a shared interface). Once my VM containing the firewall is successfully set up, I will add a third interface, virbr0, which is the default KVM virtual bridge. This is the bridge that will connect the other VMs to the network through the virtual adapter.
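
    If that plan holds, the VM creation would look something like this. A sketch with virt-install; the PCI device name for eth0 (as shown by virsh nodedev-list) and the disk/ISO paths are placeholders.

    Code:
        virt-install \
          --name firewall \
          --ram 2048 \
          --vcpus 2 \
          --disk /var/lib/libvirt/images/firewall.qcow2,size=16 \
          --cdrom /tmp/firewall-installer.iso \
          --hostdev pci_0000_03_00_0 \
          --network bridge=br0

    The third interface can then be added once the firewall works, e.g. virsh attach-interface firewall network default --model virtio --persistent.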

    MyPrettySeahorse-PatrickwithNailandBoardonHead.jpg
     
  19. Private_Ale

    Private_Ale King Neckbeard

    Screenshot-localhost Connection Details.png
     
  20. Private_Ale

    Private_Ale King Neckbeard

    OK!

    My firewall is now fully virtualized. Jesus, that was a learning experience. It took me all day, so I ended up installing pfSense since I already know how to configure it. However, I still want to use Sophos, so tomorrow I'll do it again now that I know how to do it properly.

    It's actually really neat! You use the Intel IOMMU to pass devices to the virtual machine; as far as the host system is concerned, the device no longer exists once it's assigned to the VM. You can even do a full passthrough of the processors.
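
    The host side of that is just a couple of virsh commands (the PCI device name below is a placeholder; list the real ones with virsh nodedev-list --cap pci):

    Code:
        # Detach the NIC from the host so the VM can own it
        virsh nodedev-detach pci_0000_03_00_0

        # After this the host no longer sees the device; hand it back with:
        virsh nodedev-reattach pci_0000_03_00_0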

    I did manage to install Sophos, but it has SOOOOOO many options. It's really cool. It's too much for me to soak in today though.

    So I have pfSense set up for now: 2 physical NICs, 2 virtual NICs. Works fantastic.

    Here are some pics:
    screenshot.877.png screenshot.879.png screenshot.878.png

    Here's how to give full passthrough of PCI devices on Linux/KVM:
    http://docs.fedoraproject.org/en-US...ualization-PCding_a_PCI_device_to_a_host.html
     
  21. Private_Ale

    Private_Ale King Neckbeard

    IMG_20141108_191514_142.jpg
     
  22. Private_Ale

    Private_Ale King Neckbeard

    I'm going to fill that 4th port, even if it kills me.
     
  23. Private_Ale

    Private_Ale King Neckbeard

    I do have to admit though, from the very little that I played with Sophos, it was freakin' awesome. SOO MANY BUTTONS. And it has a country filter built in! It has built-in virus scanning! Website blocking!

    Get this! It even emails you about stuff!! :eek:
     
  24. Private_Ale

    Private_Ale King Neckbeard

    So pretty.

    IMG_20141109_115352_588.jpg IMG_20141109_115352_670.jpg IMG_20141109_115352_968.jpg

    The fading red strobe is the RAID card's heartbeat, the orange and green lights are the NICs' port lights (orange for link, green for activity), and the other flashing green is the IPMI's heartbeat.
     
  25. Private_Ale

    Private_Ale King Neckbeard

    Did you know: even though Geneva is a single server, it's technically two nodes. Two NUMA nodes!

    Code:
        <topology>
          <cells num='2'>
            <cell id='0'>
              <memory unit='KiB'>12361904</memory>
              <cpus num='12'>
                <cpu id='0' socket_id='0' core_id='0' siblings='0,12'/>
                <cpu id='1' socket_id='0' core_id='1' siblings='1,13'/>
                <cpu id='2' socket_id='0' core_id='2' siblings='2,14'/>
                <cpu id='3' socket_id='0' core_id='8' siblings='3,15'/>
                <cpu id='4' socket_id='0' core_id='9' siblings='4,16'/>
                <cpu id='5' socket_id='0' core_id='10' siblings='5,17'/>
                <cpu id='12' socket_id='0' core_id='0' siblings='0,12'/>
                <cpu id='13' socket_id='0' core_id='1' siblings='1,13'/>
                <cpu id='14' socket_id='0' core_id='2' siblings='2,14'/>
                <cpu id='15' socket_id='0' core_id='8' siblings='3,15'/>
                <cpu id='16' socket_id='0' core_id='9' siblings='4,16'/>
                <cpu id='17' socket_id='0' core_id='10' siblings='5,17'/>
              </cpus>
            </cell>
            <cell id='1'>
              <memory unit='KiB'>12384736</memory>
              <cpus num='12'>
                <cpu id='6' socket_id='1' core_id='0' siblings='6,18'/>
                <cpu id='7' socket_id='1' core_id='1' siblings='7,19'/>
                <cpu id='8' socket_id='1' core_id='2' siblings='8,20'/>
                <cpu id='9' socket_id='1' core_id='8' siblings='9,21'/>
                <cpu id='10' socket_id='1' core_id='9' siblings='10,22'/>
                <cpu id='11' socket_id='1' core_id='10' siblings='11,23'/>
                <cpu id='18' socket_id='1' core_id='0' siblings='6,18'/>
                <cpu id='19' socket_id='1' core_id='1' siblings='7,19'/>
                <cpu id='20' socket_id='1' core_id='2' siblings='8,20'/>
                <cpu id='21' socket_id='1' core_id='8' siblings='9,21'/>
                <cpu id='22' socket_id='1' core_id='9' siblings='10,22'/>
                <cpu id='23' socket_id='1' core_id='10' siblings='11,23'/>
              </cpus>
            </cell>
          </cells>
        </topology>
    
    Geneva has two CPU sockets and two memory banks. Each is considered a separate node even though they're one system. With HTT enabled, each single core has two threads; that's what the "siblings" are. For example, CPU ID 0 is a real core and CPU ID 12 is an HTT core (etc.). With that in mind, as you can see, both 0 and 12 are siblings residing on the same core.
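
    (That XML is what virsh capabilities reports, by the way. If you just want the sibling map, lscpu prints the same thing one logical CPU per line:)

    Code:
        # One line per logical CPU: which node, socket, and core it belongs to
        lscpu --extended=CPU,NODE,SOCKET,CORE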

    When creating virtual machines on a NUMA host, it's important to make sure you restrict the VM to a single NUMA node. To do this, we use pinning. Pinning... pins a VM's virtual CPUs to specific physical cores. For instance, if we create a VM on Node 0 and we want a topology of 2 cores and 4 threads, we can pin it to CPUs 3,15 and 5,17. If we want a topology of two cores and no threads, we can pin it to 3 and 5 (etc.).
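
    In libvirt, pinning is done with <cputune> in the domain XML. A minimal sketch for the 2-core/4-thread example above; the vcpu numbers are the guest's, while the cpuset values are the host CPU IDs from the topology dump:

    Code:
        <vcpu placement='static'>4</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='3'/>
          <vcpupin vcpu='1' cpuset='15'/>
          <vcpupin vcpu='2' cpuset='5'/>
          <vcpupin vcpu='3' cpuset='17'/>
        </cputune>
        <cpu>
          <topology sockets='1' cores='2' threads='2'/>
        </cpu>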

    Although a VM without pinning can freely float between NUMA nodes, it's generally not recommended, since the VM will take a slight performance hit. If a VM is currently executing on 11,23 (Node 1), but a piece of information is in Node 0's memory bank, 11,23 will need to use the QuickPath Interconnect to access the other memory. This operation is slower than accessing its own memory bank. This is why it's good, recommended practice to pin VMs to specific CPU IDs.
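
    You can actually see that penalty expressed as node "distances" on the host (both commands come with the numactl package):

    Code:
        # Node sizes, free memory, and relative access costs (bigger distance = slower)
        numactl --hardware

        # Per-node allocation counters; numa_miss climbing means cross-node traffic
        numastat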

    Geneva has two 6-core/12-thread processors, each with its own triple-channel 12GB memory bank. Knowing this, I can now create VMs with both performance and limits in mind. I wouldn't want to assign more than 12GB of memory to the VMs on a given node, even though technically there is 24GB in the total pool. I also wouldn't want to mix and match processors: even though there are technically 12 cores and 24 threads, I wouldn't want to put cores from two separate nodes on the same VM. However, if I did, I could tell the VM that it is being assigned resources from two sockets. That way the VM would handle its resources more properly internally; it would itself be a NUMA host. Although telling the VM that it's a NUMA host (2 sockets, XX cores, XX threads) would yield better performance than letting a VM float freely between the two nodes, it would still perform slightly worse than a VM restricted to a single node.
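
    For completeness, here's roughly what telling a guest it spans both sockets looks like in the domain XML. A sketch; the vCPU counts and the memory split are placeholders:

    Code:
        <cpu>
          <topology sockets='2' cores='2' threads='2'/>
          <numa>
            <!-- memory is in KiB; 4 GiB per guest NUMA cell here -->
            <cell cpus='0-3' memory='4194304'/>
            <cell cpus='4-7' memory='4194304'/>
          </numa>
        </cpu>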

    Source:
    https://access.redhat.com/documenta...rise_Linux/5/html/Virtualization/ch33s08.html
    http://en.wikipedia.org/wiki/Non-uniform_memory_access

    Don't mind me. I just like writing down things I learn. It helps me remember.