Monthly Archives: November 2015

WAIT!

Who wants to wait around for other people/things?

Definitely not your cache server, and if it is waiting around, then there could be a problem somewhere.

See my case below:

[Graphs: system load, 1-day view; kernel usage, 1-day view]
The server has only 4 CPUs, and its load average is almost double that.
Wow, why is there so much work happening on the CPU? What could it be processing that is causing such high load?

Nothing. The answer is ZERO.
It isn’t doing a thing; it is, rather, waiting.

If you take a look at the Kernel Usage graph, we see that we are waiting. A lot. This is because the disks in this high-demand cache simply cannot keep up with the load they are expected to handle: serving a lot of content to a country of users.

Poor thing.

As it stands this box has 3 x 3TB drives in it, striped for ghaddagofast speeds. But it still isn’t enough. The actual transfer rates seem okay and don’t look like the bottleneck; rather, it is the IOPS the drives are expected to serve.
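
If you want to see this for yourself, iostat (from the sysstat package) paints the picture quickly. A minimal check, assuming a Debian-ish box; the columns called out in the comments are the ones that matter here:

# watch extended disk stats, refreshing every 5 seconds
iostat -x 5
#
# r/s + w/s = IOPS the disk is actually doing
# await     = average ms a request spends waiting (queue time included)
# %util     = how busy the device is; pegged near 100% with a high
#             await means you are IOPS-bound, not bandwidth-bound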

This problem could be EASILY solved with 1 x SSD. Seriously easily, and a little $$.
But $$ isn’t always on your side. People like their $$$.

So we have to use what is around, which in this case is a lot of servers and magnetic drives.

Cool, we can just pop some more HDDs into the server, can’t we?
Wrong.

The server is 1U with only 3 bays. There is no space.

So now we need to make a plan. Or we scrap the project.

So we get 2 x of the old servers.
These can take 3 x drives each. We populate the bays.

We then install FreeNAS, because I enjoy using it and it sends me emails, which I quite like.
It also supports iSCSI, which we plan to use to share the drives over 2 x 1Gbps NICs.

iSCSI, which stands for Internet Small Computer System Interface, works on top of the Transmission Control Protocol (TCP) and allows SCSI commands to be sent end-to-end over local-area networks (LANs), wide-area networks (WANs) or the Internet.
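
On the cache server side (assuming it runs Linux with open-iscsi; the portal IP and target IQN below are made up for illustration), hooking up to the FreeNAS targets goes roughly like this:

# find the targets the FreeNAS box is exposing
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# log in to one of them (the IQN here is hypothetical)
iscsiadm -m node -T iqn.2015-11.lan:cachedisks -p 192.168.1.50 --login

# the LUN now appears as a plain block device, ready to be striped in
lsblk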

Will it work? I hope so. If it doesn’t work after this, that really is kinda sucky.

Time will tell, right?

So you think you can NIC?

Sometimes in life, you get even bigger servers, with MOAR NICs than you can count on one hand.
At rAge we had 12 of the bad boys.

Here we have a server with 5 x 1GbE on the mobo, and an add-in card with another 4 NICs.

We will only use 5 for now, as we don’t wanna kill that switch so early in its lifetime!

This is what we are working with:

root@gamecache:/# networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback n/a n/a
2 ens7f0 ether n/a n/a
3 ens7f1 ether n/a n/a
4 ens4f0 ether n/a n/a
5 ens4f1 ether n/a n/a
6 ens5f0 ether n/a n/a
7 ens5f1 ether n/a n/a
8 enp12s0f0 ether n/a n/a
9 enp12s0f1 ether n/a n/a
10 enp16s0 ether n/a n/a

10 links listed.

That is 9 NICs, all 1GbE. So let’s team up!
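
One thing to sort out before the interfaces file below will do anything (assuming Debian/Ubuntu, which is where this ifupdown-style config lives): the bonding bits need to be installed and loaded.

# userspace helper for ifupdown bonding
apt-get install ifenslave

# load the bonding module now, and at every boot
modprobe bonding
echo "bonding" >> /etc/modules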

Here is what we would use in our interfaces file:

# The loopback network interface
auto lo
iface lo inet loopback

#Bond0 - mode 4 below is 802.3ad (LACP), so the switch ports need a matching LAG
auto bond0
iface bond0 inet static
address 192.168.1.33
netmask 255.255.255.0
bond-slaves none
bond-mode 4
bond-miimon 100

#we will use onboard ports 1,2,3,4 plus add-on 2 port 1, as we know they work

#Port 1
auto ens7f0
iface ens7f0 inet manual
bond-master bond0
bond-primary ens7f0 ens7f1 ens4f0 ens4f1 ens5f1

#Port 2
auto ens7f1
iface ens7f1 inet manual
bond-master bond0
bond-primary ens7f0 ens7f1 ens4f0 ens4f1 ens5f1

#Port 3
auto ens4f0
iface ens4f0 inet manual
bond-master bond0
bond-primary ens7f0 ens7f1 ens4f0 ens4f1 ens5f1

#Port 4
auto ens4f1
iface ens4f1 inet manual
bond-master bond0
bond-primary ens7f0 ens7f1 ens4f0 ens4f1 ens5f1

#Port 5 - not part of the bond, plain DHCP
auto enp16s0
iface enp16s0 inet dhcp

#Add on 2 Port 1
auto ens5f1
iface ens5f1 inet manual
bond-master bond0
bond-primary ens7f0 ens7f1 ens4f0 ens4f1 ens5f1

#Add on 2 Port 2
#auto ens5f0
#iface ens5f0 inet dhcp
Quite a few NICs, and now we have 5Gbps at hand, with up to 9Gbps if we enslave the rest in future.
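
Once it is up, the kernel will tell you whether the LACP negotiation actually worked:

# bring the bond up
ifup bond0

# look for "Bonding Mode: IEEE 802.3ad Dynamic link aggregation"
# and an "MII Status: up" entry for each of the 5 slaves
cat /proc/net/bonding/bond0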

Off to the DC we go!

The bigger scope of things.

Sometimes we don’t get to use allll of the IPs we’d like; we don’t get the luxury of having an infinite amount available, and we need to make do with ONE IP.

Well, NGINX doesn’t stress about that; we just need to set it up. NGINX will take a look at the Host header, and use it to determine which server the user is looking for.

So here is more-or-less how it works:

===
server {
listen 80;
access_log /var/log/nginx/host.domain1.access.log main;
root /var/www/domain1;
server_name www.domain1.com;
xxx
}

server {
listen 80;
access_log /var/log/nginx/host.domain2.access.log main;
root /var/www/domain2;
server_name www.domain2.com;
xxx
}
===

Now the important thing is the server_name.

Here, you put the redirected DNS hosts, and Bob’s your uncle… It works perfectly with the StreamCache setup.
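
A quick sanity check from any machine, using the example hostnames from the config above (swap <server-ip> for your one real IP): point two requests at the same address with different Host headers and watch NGINX pick different server blocks.

curl -H "Host: www.domain1.com" http://<server-ip>/
curl -H "Host: www.domain2.com" http://<server-ip>/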